WO2022221994A1 - Event-driven integrated circuit having an interface system - Google Patents
Event-driven integrated circuit having an interface system
- Publication number
- WO2022221994A1 (application PCT/CN2021/088143)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- event
- module
- driven
- address
- interface system
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/47—Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/50—Control of the SSIS exposure
- H04N25/57—Control of the dynamic range
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/79—Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
Definitions
- the present invention relates to an event-driven integrated circuit, and in particular to a low-power integrated circuit with an interface module for asynchronously processing events.
- Event-driven sensors exist in the prior art.
- One class of event-driven sensor is the event-driven camera, which includes an array of pixels. When the brightness of a pixel changes, the camera generates an event that includes an identifier of the change, for example -1 for darker and +1 for brighter; such cameras are called dynamic vision sensors (DVS).
- Other event-driven sensors are also known, such as one-dimensional sensors and sound sensors.
- a DVS generates events in an asynchronous manner; it is an event-driven sensor.
- traditional clock-based cameras must read out entire frames or lines covering all pixels and therefore cannot match a DVS.
- a DVS enables ultra-fast image processing while maintaining a low data rate, because it records only changes.
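- For illustration only (not taken from the patent text), the following minimal Python sketch models how such a sensor might represent and emit events, assuming each event carries the pixel address and a polarity flag:

```python
# Illustrative sketch (an assumption, not the patent's implementation): a minimal
# representation of a DVS-style event carrying a pixel address and a polarity flag.
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column (part of the event address)
    y: int          # pixel row (part of the event address)
    polarity: int   # +1 for brighter, -1 for darker

def on_brightness_change(x: int, y: int, delta: float, threshold: float = 0.1):
    """Emit an event only when the per-pixel brightness change exceeds a threshold."""
    if abs(delta) < threshold:
        return None                      # no event: unchanged pixels produce no output
    return Event(x, y, +1 if delta > 0 else -1)

# Example: only the changed pixel produces output, unlike a frame-based camera.
print(on_brightness_change(12, 7, +0.5))   # Event(x=12, y=7, polarity=1)
print(on_brightness_change(3, 4, 0.0))     # None
```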
- the term processing pipeline refers not only to wiring, such as the interconnections between different components, but also to the data processing performed by the components and the data transfer between them.
- the term processing pipeline also refers to the particular manner in which the various output ports or first components of the system are connected to the various input ports or second components of the system.
- the journal Science first introduced IBM's brain-inspired chip TrueNorth, which has 5.4 billion transistors, 4,096 neurosynaptic cores, 1 million programmable spiking neurons and 256 million configurable synapses.
- the chip adopts an event-driven design and is an asynchronous-synchronous hybrid: the routing, scheduler and controller use a quasi-delay-insensitive, clockless asynchronous design, while the neurons use traditional clocked synchronous logic.
- the clock is generated by an asynchronous controller with a global clock frequency of 1 kHz; with a video input of 30 frames per second at 400*240 pixels, the power consumption of the chip is 63 mW.
- Prior art 1 "A million spiking-neuron integrated circuit with a scalable communication network and interface", Paul A.Merolla,John V.Arthur etal,Vol.345,Issue 6197,SCIENCE,8Aug 2014.
- a gesture recognition system based on the IBM TrueNorth chip has been disclosed: refer to Fig. 1 and Fig. 4 of that article (or Fig. 11 of the present invention; the details of this figure can be found in the original text and are not repeated here). The TrueNorth processor located on the NS1e development board receives events output from the DVS128 through USB 2.0; in other words, the connection between the DVS and the processor is a USB cable. For details, please refer to:
- Prior art 2 "A Low Power, Fully Event-Based Gesture Recognition System", Arnon Amir, Brian Taba et al, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21-26 July 2017.
- Prior art 3 "A hybrid and scalable brain-inspired robotic platform", Zhe Zou, Rong Zhao etal, Scientific Reports, 23 Oct 2020.
- USB cables, or other cable-based implementations, all have a certain connection length, so the system may suffer from signal loss and from noise interference coupled into the cables. Moreover, because a cable is used, each handshake between devices, i.e. the communication required before and after data transmission, consumes more energy and slows down the processing of the system, which adversely affects the performance of brain-like chips.
- yet these leading designers are not aware of the adverse effect of this factor and believe that their proposed solutions have exhausted all efforts in the pursuit of extremely low power consumption and already satisfy the various requirements.
- in order to obtain better-quality image information from a DVS, those skilled in the art need to choose a special semiconductor process to manufacture the DVS, such as a CIS-CMOS image sensor process.
- meanwhile, the CMOS process used for an AI processor, such as the sCNN processor described later, is not suitable for manufacturing high-quality image sensors (the imaging results are not ideal).
- the AI processor will then occupy a large amount of chip area and increase chip cost, which, given the development trend toward ever smaller chip footprints, means a loss of commercial competitiveness. Therefore, how to eliminate signal loss and noise interference, and preferably also to further reduce chip footprint and manufacturing cost, is an important problem to be solved for the industrialization/commercialization of brain-like chips.
- the present invention is proposed to solve one of the above technical problems or a combination of several of them; the technical solution of the invention can solve or alleviate one or more of the above technical problems.
- the technology mentioned in the above background may, in whole or in part, be undisclosed technology; that is, the applicant does not admit that the technology mentioned in the background necessarily constitutes prior art in the sense of the patent law, unless there is substantial evidence to prove it.
- the technical solutions and technical features described in the above background are disclosed together with the disclosure of the present patent document.
- An integrated circuit comprising an event-driven sensor (10), an event-driven interface system (20) and an event-driven processor (30), wherein the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are coupled to a single chip (3).
- the event-driven sensor (10) is configured to asynchronously generate and asynchronously output an event (100) after an input device (11) of the event-driven sensor (10) detects an event-generating signal or/and a change in the event-generating signal, the event (100) comprising or being associated with an event address indicating said input device (11); the output of said event-driven sensor (10) is coupled to an input of said event-driven interface system (20);
- the event-driven interface system (20) is configured to asynchronously receive the event (100) and preprocess the received event (100); the output end of the event-driven interface system (20) is coupled to an input end of the event-driven processor (30);
- the event-driven processor (30) is configured to: receive an event (101) preprocessed by the event-driven interface system (20), and process the received event (101) in an asynchronous manner;
- the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are coupled to a single chip (3) through an interposer (40).
- both the event-driven interface system (20) and the event-driven processor (30) are located on the first die (1-1); or, the event-driven sensor (10) and the event-driven interface system (20) are both located on the second die (1-2); or, a part of the event-driven interface system (20) and the event-driven processor (30) are located on the first die (1-1) while the other part of the event-driven interface system (20) and the event-driven sensor (10) are located on the second die (1-2).
- both the event-driven interface system (20) and the event-driven processor (30) are located on the first die (1-1), and the event-driven sensor (10) is located on the second die (1-2), which is stacked on the first die (1-1) where the event-driven interface system (20) and the event-driven processor (30) are located.
- the interposer (40) is a silicon interposer or a glass interposer.
- the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are packaged on a single chip (3) by the above 2.5D or 3D packaging technology.
- the event-driven sensor (10) is of one of the following types, or a combination of one or more of them: point sensor, 1D sensor, 2D sensor.
- the event-driven sensor (10) is of one or a combination of the following types: sound/vibration sensor, dynamic vision sensor.
- the event-driven processor (30) is configured with a spiking neural network.
- the event-driven processor (30) is configured with a spiking convolutional neural network.
- the first die and the second die are fabricated using different processes.
- the event-driven interface system (20) includes at least one interface module (200); the interface modules (200) form a programmable daisy chain and asynchronously process events (100) received from the event-driven sensor (10).
- the at least one interface module (200) includes a replication module (201) configured to: receive an event (100) and perform a replication operation to obtain a replicated event (100c), the event (100) coming from the event-driven sensor (10) or from another interface module (200) of the event-driven interface system (20) (which, in certain classes of embodiments, is a fusion module); send the replicated event (100c) to an external processing pipeline; and send the event (100) along the daisy chain.
- the at least one interface module (200) includes a fusion module (202) configured to: receive events (100, 100e) from at least two different sources, wherein the event (100) comes from another interface module (200) of the event-driven interface system (20) (which, in certain classes of embodiments, is a replication module) or from the event-driven sensor (10), and the event (100e) comes from a component/module of this or another integrated circuit, or from another event-driven sensor; and send some or all of the received events (100, 100e) along the programmable daisy chain to the subsequent interface module (200).
- the at least one interface module (200) includes a subsampling module (203) configured to assign a single address to a number of events (100) received.
- the subsampling module (203) comprises a separation module (203-4) configured to route the event (100), according to the address value of the received event (100), to an associated scaling register, and an address reassembly module (203-5) configured to adjust the event address according to the scaled address value and then send the address-adjusted event (100) along the programmable daisy chain.
- the at least one interface module (200) includes a region of interest module (204) configured to adjust an attribute of at least one event address, the adjustment including one or more of the following: shifting, flipping, transposing or/and rotating at least one attribute of the event address; or/and
- the at least one interface module (200) includes an event routing module (205) configured to receive the event (100), add header information to the received event (100), and send the event (100) together with its header information to the event-driven processor (30) or/and to other event-driven processors or other processing pipelines.
- the at least one interface module (200) includes a rate control module configured to send only a portion of the events (100) along the programmable daisy chain once a maximum rate is exceeded, so as to limit the event rate to not exceed the maximum rate.
- the at least one interface module (200) includes a mapping module (206) configured to map one event address to another event address.
- mapping module (206) includes one or a combination of the following:
- a region of interest module, a lookup table module, a flip or/and rotation module; wherein the flip or/and rotation module is configured to flip or/and rotate the event address of the event (100).
- the at least one interface module (200) includes an event address rewriting module (207) configured to convert the received event address into a uniform address format, whereby a uniform event address format is passed along the programmable daisy chain.
- the at least one interface module (200) includes an event address filtering module (208) configured to filter out a series of events (100) having a particular selected event address.
- the event address filtering module (208) is specifically a hot pixel filtering module (208'), which is configured to filter out events (100) with specific event addresses, a preset list of the event addresses to be filtered being stored in the CAM memory (208'-3).
- any one or more interface modules (200) of the event-driven interface system (20) may be bypassed by programmable switches.
- the present invention also provides an event-driven interface system (20) which is coupled to an event-driven sensor (10) and an event-driven processor (30) to form an integrated circuit, the event-driven sensor (10) generating and asynchronously outputting an event (100), said event (100) including or being associated with an event address indicating the input device (11) on said event-driven sensor (10) that generated the event; said event-driven interface system (20) comprises at least one interface module (200), and the interface modules (200) form a programmable daisy chain and asynchronously process events (100) received from the sensor (10).
- the at least one interface module (200) includes one or more of the following: a replication module (201), a fusion module (202), a subsampling module (203), a region of interest module (204) and an event routing module (205); where:
- the replication module (201) is configured to: receive an event (100) and perform a replication operation to obtain a replicated event (100c), the event (100) coming from the event-driven sensor (10) or from another interface module (200) of the event-driven interface system (20); send said replicated event (100c) to an external processing pipeline; and send said event (100) along said daisy chain;
- the fusion module (202) is configured to receive events (100, 100e) from at least two different sources, wherein the event (100) comes from another interface module (200) of the event-driven interface system (20) or from said event-driven sensor (10), and the event (100e) comes from a component/module of this or another integrated circuit or from another event-driven sensor, and to send some or all of the received events (100, 100e) along said programmable daisy chain to the subsequent interface module (200);
- the subsampling module (203) is configured to assign a single address to a number of received events (100);
- the region of interest module (204) is configured to adjust the attribute of at least one event address in one or more of the following ways: shifting, flipping, transposing or/and rotating the attribute of at least one event address; or/and
- the at least one interface module (200) has the following interface module coupling sequence:
- a replication module (201), a fusion module (202), a subsampling module (203), a region of interest module (204), and an event routing module (205); or
- a fusion module (202), a replication module (201), a subsampling module (203), a region of interest module (204), and an event routing module (205).
- for the replication module (201), the event (100) comes from another interface module (200) of the event-driven interface system (20), specifically the fusion module (202); or/and, for the fusion module (202), the event (100) comes from another interface module (200) of the event-driven interface system (20), specifically the replication module (201).
- upstream of the interface module coupling sequence there is further included: an event address rewriting module (207) or/and an event address filtering module (208); wherein the event address rewriting module (207) is configured to convert the received event address into a unified address format, so that a unified event address format is transmitted on the programmable daisy chain;
- the event address filtering module (208) therein is configured to filter out a series of events (100) having a particular selected event address.
- the event (100) is first processed by the event address rewriting module (207) and then processed by the event address filtering module (208).
- the event address filtering module (208) is specifically a hot pixel filtering module (208'), which is configured to filter out events (100) with specific event addresses, a preset list of the event addresses to be filtered being stored in the CAM memory (208'-3).
- the at least one interface module (200) further includes a mapping module (206) including one or a combination of the following: a region of interest module, a lookup table module, a flip or/and rotation module; wherein the flip or/and rotation module is configured to flip or/and rotate the event address of the event (100).
- the at least one interface module (200) further includes a rate control module configured to send only a portion of the events (100) along the programmable daisy chain once a maximum rate is exceeded, so as to limit the event rate to not exceed the maximum rate.
- any one or more interface modules (200) of the event-driven interface system (20) may be bypassed by programmable switches.
- the event-driven interface system (20) and the event-driven processor (30) are coupled to a single chip (3) through an interposer (40), or are fabricated in the same die.
- An event-driven interface system is thus provided which can transmit events efficiently, flexibly and with low power consumption, and which provides an event preprocessing function so that a processor can process events efficiently and conveniently.
- FIG. 1 is a schematic diagram of an event-driven circuit system according to a certain class of embodiments of the invention.
- FIG. 2 is a schematic diagram of a circuit system according to another embodiment of the invention.
- FIG. 3 is a cross-sectional view of a chip in accordance with an embodiment of the invention.
- FIG. 4 is a schematic cross-sectional view of a 3D chip geometry according to an embodiment of the invention.
- FIG. 5 is a schematic diagram of a circuit system including a sound/vibration sensor according to an embodiment of the invention.
- FIG. 6 is a flowchart of the processing of events generated by the sensor.
- FIG. 7 is a flowchart of the processing of events produced by a sound sensor recording a vibration signal.
- FIG. 8 is a schematic diagram of a daisy chain of an interface system.
- FIG. 9 is a schematic diagram of a hot pixel filtering module having the ability to filter events with selected event addresses.
- FIG. 10 is a schematic diagram of a sub-sampling module.
- FIG. 11 is a gesture recognition system based on IBM TrueNorth in the prior art.
- the method-class and product-class embodiments may each describe some technical features separately; this document implies that the corresponding embodiments of the other class also have the corresponding/matching technical features, even where the matching device/step is not explicitly described in the text.
- that is, method-type embodiments implicitly include the steps/instructions/functions performed or implemented by the devices/modules/components of the product-type embodiments, and product-type embodiments implicitly include the devices/modules/components that implement the steps/instructions/functions of the method-type embodiments.
- module refers to a product or part of a product that is implemented solely by hardware, only by software, or by a combination of software and hardware. Unless clearly indicated by the context, it is not implied in the present invention that the above-mentioned terms can only be implemented by hardware or software.
- the multi-scheme expressions "A and/or B" and "A or/and B" both cover three parallel technical schemes: (1) A; (2) A and B; (3) B.
- the intended meaning is that the technical solution is allowed to be implemented with tolerances, provided that the solution of the technical problem is not affected; that is, it is not required that data obtained by strict measurement of actual parameters conform exactly to the general mathematical definition (since no physical entity fully conforms to a mathematical definition). Such terms are therefore not ambiguous and do not render the technical solution unclear.
- whether something falls within the limited/expressed range should in fact be judged by whether the technical problem can still be solved.
- B corresponding to A means that B is associated with A, and B can be determined according to A. However, it should also be understood that determining B according to A does not mean that B is only determined according to A, and B may also be determined according to A and/or other information.
- first, second, etc. are usually used to identify and distinguish objects, and do not limit the number of objects of the same type. Although such a term usually refers to a single object, it does not mean that there is only one object of that type; for example, multiple objects may be used for effect enhancement, load sharing or equivalent replacement.
- the field described in the scheme belongs to event-driven integrated circuit systems, so its sensors, interface systems, and processors are all event-driven.
- "integrated circuit system" and "integrated circuit" have basically the same meaning
- "interface system" and "interface circuit" have basically the same meaning
- "system" here carries the meaning of a product attribute.
- coupled refers to an electrical connection relationship between two or more components.
- An event-driven integrated circuit system 1 includes an event-driven sensor 10 (hereinafter referred to as a sensor), an event-driven interface system 20 (hereinafter referred to as an interface system) and an event-driven processor 30 (hereinafter referred to as a processor).
- the event-driven sensor 10, the event-driven interface system 20 and the event-driven processor 30 are divided according to function for convenience of description, but this does not mean that these components are necessarily physically independent. They can be implemented as three separate components, or several of them can be combined so that multiple functions are integrated in a single component; for example, the sensor 10 and the interface system 20 can be combined, and in particular the interface system 20 and the processor 30 can be combined on the same die (also called a bare die or bare chip). Some of these choices may reduce performance in a certain respect, but the present invention does not limit the physical partitioning.
- in a certain class of embodiments of the present invention, the above event-driven sensor 10, event-driven interface system 20 and event-driven processor 30, i.e. at least these three components, are coupled to a single chip (not shown in FIG. 1).
- in another class of embodiments, the three components described above are coupled to a single chip containing only a single die.
- the sensor 10 and the processor 30 use the same fabrication process, such as a conventional 65nm CMOS process, but this suboptimal solution comes at the expense of the image quality of the sensor 10.
- the interposer includes but is not limited to: silicon interposer and glass interposer.
- the present invention does not limit the material type of the interposer.
- "single chip" is meant to include more than one die coupled through an interposer, or only one die without the need for an interposer. It should be noted that in some cases the context may imply/restrict the term to only one of these meanings.
- event-driven sensor 10 is an event-driven 2D array sensor (such as an event-driven camera), and sensor 10 typically includes one or more event-driven sensor input devices 11 .
- An event-driven camera includes a large number of pixels, and each pixel is an event-driven input device 11 .
- the input device 11 of the event-driven sensor is configured to asynchronously generate an event upon detection of an event-generating signal or/and a change in the event-generating signal, such as a change in light intensity on a pixel.
- each event is associated with or includes an event address that includes/indicates an identifier of the input device 11, such as the X and Y coordinates of the pixel in the 2D array.
- the event-driven sensor is a 1D, 2D, 3D or other type of sensor.
- the sensor 10 is coupled to the input 21 of the event-driven interface system 20 through the output 12 of the sensor 10, and the sensor 10 outputs the event 100 in an asynchronous manner.
- interface system 20 may include a series of interface modules 200, where each interface module 200 is configured to process incoming events 100 in a programmable manner. In this way, all events 100 processed by the processor 30 have the same format from the perspective of a unified event address structure and possible event headers.
- the interface module 200 may be configured to perform 1) a filtering step, or/and 2) an address manipulation step, so as to limit the incoming event rate to within the processing capability of the processor 30 or/and to provide a predefined event address format.
- the interface system 20 includes a series of/multiple parallel inputs 21 (e.g. 21-1, 21-2) for receiving events 100 from the sensor 10, and also includes a series of/multiple parallel outputs 22 (e.g. 22-1, 22-2) coupled to the input of the processor 30 and configured to transmit the preprocessed events 101 to the processor 30 in parallel. This arrangement allows multiple events to be transmitted simultaneously, resulting in reduced power consumption and fast event processing.
- Figure 2 shows an alternative embodiment of some kind.
- the event-driven sensor 10 is a 1D event-driven sensor, such as an event-driven mechanical pressure sensor, which is used to detect mechanical vibrations.
- sensor 10 , interface system 20 and processor 30 are assembled on a single chip 3 and coupled through an interposer board 40 .
- in one class of embodiments, the sensor 10, interface system 20 and processor 30 are coupled to the same side of chip 3; in another class of embodiments, the sensor 10, interface system 20 and processor 30 are coupled to both sides of chip 3.
- the present invention does not limit whether the above-mentioned three components are assembled on the same side.
- sensor 10 and interface system 20 are in the same die; while in another class of embodiments, interface system 20 and processor 30 are in the same die.
- the sensor 10 may also be a point sensor, in which case the event addresses of the point sensor are all the same address.
- for the sound sensor type, in some embodiments at least two sound sensors at different physical positions may be included to realize stereo sound collection.
- this event-driven system is designed so that the power consumption in application is extremely low and it can operate asynchronously; it is especially suitable for battery-powered application scenarios with long operating times.
- the processor 30 is configured as an event-driven spiking artificial neural network (or simply an event-driven spiking neural network, also known in the art as a spiking neural network SNN).
- the SNN includes a variety of network algorithms.
- the above-mentioned neural network is configured as an event-driven spiking convolutional neural network (sCNN), which is particularly suitable for ultra-fast application requirements, such as object recognition.
- the specific implementation of the sCNN can at least refer to the prior art (PCT patent application document, title: Event-driven spiking convolutional neural network, publication date: 15, Oct, 2020):
- the event-driven processor 30 configured as an event-driven spiking neural network or sCNN, combined with the circuit-integrated geometry, can further accommodate the needs of long-term, low-power application scenarios.
- the integrated circuit system thus configured can output only relevant information about detected objects, such as "[table]", "[chair]", "[** is approaching]" and the like. Compared with traditional technology, it does not need to record and upload a large amount of data, avoiding the transmission delay of connecting to cloud data, massive computing-power requirements and power consumption, and is very suitable for application scenarios requiring low power consumption, low latency, low data-storage pressure and long battery life, such as IoT and edge computing.
- the average power consumption of a solution adapted to a 64*64 DVS under a 65nm process is as low as 0.1mW, with a peak power consumption of only 1mW; the average power consumption of a solution adapted to a 128*128 DVS is as low as 0.3mW, with a peak power consumption of only 3mW.
- the processor 30 includes at least two (or more) processors, in particular each processor is configured to perform a different task. Processor 30 is also configured to process events asynchronously.
- a C4 bump 43 may be used to connect to the interposer 40, and the interposer 40 is provided with a number of through-holes 42.
- the interposer 40 is provided with a number of micro-bumps 41, and two bare dies are arranged on the micro-bumps 41: a first bare die (or first integrated circuit structure) 1-1 and a second bare die (or second integrated circuit structure) 1-2.
- the coupling of the first die 1-1 and the second die 1-2 can be realized by some optional specific means such as micro bumps ( ⁇ bumps) 41 and through holes 42 .
- both the event-driven interface system (20) and the event-driven processor (30) are located on the first die (1-1).
- both the event-driven sensor (10) and the event-driven interface system (20) are located on the second die (1-2).
- a portion of the event-driven interface system (20) and the event-driven processor (30) are both located on the first die (1-1) and another portion of the event-driven interface system (20) and the event-driven sensor (10) are all located in the second bare die (1-2).
- the through holes 42 in the present invention include but are not limited to: through silicon vias (TSVs) and through glass vias (TGVs).
- the different dies described above are coupled through Cu-Cu technology.
- the interposer 40 of the present invention includes, but is not limited to, a silicon interposer or a glass interposer.
- Figure 4 is a cross-sectional view of a 3D chip geometry in certain types of embodiments.
- the interface system 20 and the processor 30 are mounted on the first die 1-1, and the sensor 10 is mounted on the second die 1-2; the interposer 40 makes it possible to spatially separate the first circuit structure 1-1 and the second circuit structure 1-2.
- the adapter plate 40 includes a plurality of through holes 42 therein.
- micro-bumps 41 are provided between the first bare die 1-1 and the interposer 40; the interface system 20 and the processor 30 are assembled on the same bare die, namely the first bare die 1-1, and electrical coupling between the interface system 20 and the processor 30 can be achieved through the micro-bumps 41 or/and the through-holes 42 and the like.
- a through-hole 42 is provided in the interface system 20, and the second die 1-2 where the sensor 10 is located is coupled to the first die 1-1 through micro-bumps 41, so that the second die 1-2 is stacked over the first die 1-1.
- a chip with a 3D structure arranged in this way has a greatly reduced footprint, faster information transfer between the different components, and lower overall system power consumption.
- the feature size/technology node of the second die 1-2 and the first die 1-1 may differ; for example, the process of the second die 1-2 can be larger than 65nm, while the process of the interface system and processor can be smaller than 65nm, such as 22/14/10/7/5nm or smaller. This allows a more cost-effective manufacturing process to be selected for each part of the chip, and the present invention does not limit the selection and combination.
- the processor 30 and the interface system 20 may also use the same manufacturing process and be fabricated on the same integrated circuit structure/die or on different integrated circuit structures.
- the system includes a sound/vibration sensor 10' connected to an event-driven amplifier 13 configured to amplify the sensor signal and output events 100 indicating intensity changes in the recorded sound/vibration power spectrum; in particular, the events are generated asynchronously for each frequency in the power spectrum.
- the amplifier 13 is connected to an interface system 20 , wherein the interface system 20 is configured to process any event 100 generated by the amplifier 13 and pass the processed event to the processor 30 .
- the sensor 10' and the amplifier 13 are arranged on the same chip, which can achieve the advantage of maintaining a single chip structure.
- the event-driven sensor 10 is of one of the following types, or a combination of one or more of them: point sensor, 1D sensor, 2D sensor.
- the event-driven sensor 10 is of one or a combination of the following types: sound/vibration sensor, dynamic vision sensor.
- a certain class of embodiments is depicted in FIG. 6, showing a flowchart for processing an event 100 generated by the sensor 10 in the circuit system 1.
- the preprocessing of the event 100 is implemented by the interface system 20; the event is preprocessed and then output to the processor 30.
- upstream refers to the side closer to the sensor
- downstream refers to the side closer to the processor, both of which are related to the sequence of event processing.
- for example, the replication module 201 is located upstream of the event routing module 205.
- each geometry between the sensor 10 and the processor 30 represents at least one processing step, which is performed by the interface system 20 .
- the interface system 20 includes a series of interface modules 200 (including but not limited to 201, 202, 203, 204, 205, 206, 207, 208, 208', etc.).
- the interface modules 200 can be independently programmable, and any one or more interface modules (200) can be bypassed by programmable switches to support more network configurations.
- the interface modules 200 may be implemented as hardware circuits and constitute a programmable daisy chain configured to process incoming events 100 from the sensor 10 in an event-driven, asynchronous manner.
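- as a purely illustrative software analogy (the modules in the patent are hardware circuits), the following sketch models a daisy chain of programmable interface modules with bypass switches; the module names, order and thresholds are hypothetical:

```python
# Illustrative sketch: interface modules chained in a daisy chain, each with a
# programmable bypass switch, processing one event at a time in order.
class InterfaceModule:
    def __init__(self, name, fn, bypassed=False):
        self.name, self.fn, self.bypassed = name, fn, bypassed

    def process(self, event):
        return event if self.bypassed else self.fn(event)

def run_daisy_chain(modules, event):
    """Pass one event through the chain; a module may drop it by returning None."""
    for m in modules:
        if event is None:
            break                 # event was filtered out upstream
        event = m.process(event)
    return event

# Hypothetical chain: address rewrite -> hot-pixel filter -> subsampling -> ROI -> routing.
chain = [
    InterfaceModule("rewrite", lambda e: e),
    InterfaceModule("hot_pixel_filter", lambda e: None if e == (0, 0) else e),
    InterfaceModule("subsample", lambda e: (e[0] // 2, e[1] // 2)),
    InterfaceModule("roi", lambda e: e, bypassed=True),   # bypassed by its programmable switch
    InterfaceModule("route", lambda e: e),
]
print(run_daisy_chain(chain, (8, 6)))   # (4, 3)
print(run_daisy_chain(chain, (0, 0)))   # None (dropped as a hot pixel)
```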
- Replication module 201: the first interface module 201 is a replication module, which is arranged downstream of the sensor 10 and includes an input 201-1 coupled to the output terminal 10-1 of the sensor 10, as well as a first output terminal 201-2a and a second output terminal 201-2b.
- the replication module 201 is configured to receive the event 100 at its input 201-1 and perform a replication operation, i.e. to replicate the received addressed event 100; it is configured to forward the replicated event 100c together with its replicated event address to the first output 201-2a of the replication module 201, and to send the received event 100 along the daisy chain through the second output 201-2b of the replication module 201.
- Replication events 100c may be fed into external processing pipelines (not shown) of different systems.
- the replication module 201 allows coupling into other systems before the event is processed in any way.
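- a minimal, purely illustrative sketch of the replication behaviour described above (the function and argument names are hypothetical, not the patent's implementation):

```python
# Illustrative sketch: the replication module copies each incoming event, sends the
# copy to an external processing pipeline, and forwards the original along the chain.
def replication_module(event, external_pipeline, daisy_chain_next):
    replica = dict(event)            # copy the event together with its event address
    external_pipeline(replica)       # first output: replica to an external pipeline
    daisy_chain_next(event)          # second output: original event continues on the chain

external_log, chain_log = [], []
replication_module({"x": 5, "y": 9, "polarity": 1}, external_log.append, chain_log.append)
print(external_log, chain_log)       # the same event appears on both outputs
```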
- Fusion module 202: the fusion module 202 is coupled to the replication module 201, with the first input terminal 202-1a of the fusion module 202 coupled to the second output terminal 201-2b of the replication module 201.
- the fusion module 202 is also provided with a second input 202-1b configured to receive events 100e from other components/modules, which may come from this circuit system or from other circuit systems (not shown), such as events from a second sensor or from a daisy-chain output.
- the fusion module 202 also includes an output 202-2 configured to transmit all received events 100 or/and events 100e along the daisy chain.
- the events 100e are merged into the stream formed by the events 100, so events 100e and 100 are no longer distinguished hereafter and are collectively referred to as events 100.
- This embodiment allows more components or/and information to be integrated in the processing pipeline of the circuit system, with more flexible system configuration capabilities.
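- the following illustrative sketch models the fusion behaviour in software, assuming two input queues standing in for the two hardware inputs:

```python
# Illustrative sketch: merge events arriving asynchronously on two inputs (from the
# replication module/sensor and from an external source) into one output stream;
# downstream modules no longer distinguish the two origins.
import queue

def fusion_module(input_a: queue.Queue, input_b: queue.Queue, output: list):
    """Drain both inputs and forward every received event along the daisy chain."""
    for q in (input_a, input_b):
        while not q.empty():
            output.append(q.get())

a, b, merged = queue.Queue(), queue.Queue(), []
a.put({"src": "sensor", "x": 1, "y": 2})
b.put({"src": "external", "x": 7, "y": 7})
fusion_module(a, b, merged)
print(merged)   # both events now travel on the same daisy chain
```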
- Sub-sampling module 203 (sub-sampling/sum pooling module): The output terminal 202-2 of the fusion module 202 is coupled to the input terminal 203-1 of the sub-sampling module 203.
- the subsampling module 203 is configured to assign a single address to a plurality of received events 100, so that the number of different event addresses is reduced. In this way, several event addresses representing, for example, several different pixels 11 of the 2D array sensor 10 can be subsampled down to effectively fewer pixels.
- the process of subsampling, in some specific application scenarios, is called binning.
- the subsampling module 203 may be bypassed by a programmable switch (not shown).
- Region of interest module 204 (ROI): the output 203-2 of the subsampling module 203 is coupled to the input 204-1 of the region of interest module 204, wherein the region of interest module 204 is configured to adjust a property of at least one event address; specifically, within the ROI module at least one property of the event address can be shifted, flipped, swapped or/and rotated.
- the operation performed may be implemented as a rewrite operation of the event address.
- the region of interest module 204 may be further configured to discard events whose address attribute values are outside the programmable address attribute value range.
- the region of interest module 204 is programmable and is configured to store programmable ranges of address attribute values as described above, the range of address attribute values being set for each address attribute.
- the area of interest module 204 is also configured to send received events 100, as long as these events 100 are not discarded, to the next level along the daisy chain along with the adjusted addresses.
- the region of interest module 204 allows manipulation of cropped images or/and other basic geometrical operations at event 100 pixel coordinates.
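- an illustrative software sketch of the region-of-interest behaviour, assuming hypothetical programmable ranges and address operations:

```python
# Illustrative sketch: keep only events whose address attributes fall inside
# programmable ranges, and shift/flip/transpose the surviving addresses.
def roi_module(event, x_range=(0, 63), y_range=(0, 63), shift=(0, 0),
               flip_x=False, transpose=False):
    x, y = event["x"], event["y"]
    if not (x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]):
        return None                       # discard events outside the programmed ROI
    x, y = x + shift[0], y + shift[1]     # shift the address attributes
    if flip_x:
        x = x_range[1] - (x - x_range[0]) # flip along X
    if transpose:
        x, y = y, x                       # swap/transpose coordinates
    return {**event, "x": x, "y": y}

print(roi_module({"x": 10, "y": 20}, flip_x=True))   # address adjusted, event forwarded
print(roi_module({"x": 200, "y": 20}))               # None: outside the ROI, dropped
```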
- Event routing module 205: the input terminal 205-1 of the event routing module 205 is coupled to the output terminal 204-2 of the region of interest module 204 to receive the event 100.
- the event routing module 205 is configured to optionally associate header information with the received event 100 and to output the event 100 together with its header information at the first output terminal 205-2a.
- the event routing module 205 is configured to replicate the event 100 including the header information and the adjusted event address, and output the replicated event 100 together with the replicated header information to the second output 205 of the event routing module 205 -2b, the second output 205-2b can be coupled to other event-driven processors or other processing pipelines.
- the event routing module 205 thus configured adds to the circuit system 1 the ability to provide preprocessed event information to any type of processor, or to a processor with any type of input format required by a running program.
- the first output 205-2a of the event routing module 205 is coupled to the processor 30, and the processed event 100 with its event address and header information is then passed to the processor 30, which executes processing tasks such as pattern or feature recognition tasks or other applications.
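- purely as an illustration (the header fields below are hypothetical), the event routing behaviour can be sketched as follows:

```python
# Illustrative sketch: attach header information to each event and optionally copy
# the result to a second output feeding another processor or processing pipeline.
def event_routing_module(event, destination_id, primary_out, secondary_out=None):
    routed = {"header": {"dest": destination_id, "fmt": "unified"}, **event}
    primary_out.append(routed)                    # first output: toward processor 30
    if secondary_out is not None:
        secondary_out.append(dict(routed))        # optional copy to another pipeline

to_processor, to_other = [], []
event_routing_module({"x": 3, "y": 4, "polarity": -1}, destination_id=0,
                     primary_out=to_processor, secondary_out=to_other)
print(to_processor[0]["header"])   # {'dest': 0, 'fmt': 'unified'}
```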
- the circuit system 1 may further include other interface modules 200 that may be configured to perform tasks on events 100 received from the sensor 10 or the like, such as rate control tasks, hot pixel filtering tasks, or event address rewriting tasks (refer to FIGS. 8-9).
- Rate control module: the rate control module is configured to limit the event rate so that it does not exceed a maximum rate, in particular rate-limiting events with the same event address. When the maximum rate is exceeded, only a fraction of the events is sent along the daisy chain; for example, every n-th received event is not sent along the daisy chain, where n is a value determined according to the current event rate. The maximum rate may be programmable and adjustable in a memory, which may belong to the rate control module or be external to it.
- the rate control module may include or be connected to a processing unit with a clock for determining the event reception rate of the module. In this way the event rate on the daisy chain will not exceed the maximum rate, which also limits the rate of data fed to the processor 30.
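- a minimal illustrative sketch of the rate-control behaviour under these assumptions (a software stand-in for the hardware module; the parameter names are hypothetical):

```python
# Illustrative sketch: once the measured event rate exceeds a programmable maximum,
# forward only a fraction of events (dropping every n-th event) so the rate fed to
# the processor stays bounded.
class RateControlModule:
    def __init__(self, max_rate_hz: float, drop_every_n: int = 2):
        self.max_rate_hz = max_rate_hz
        self.drop_every_n = drop_every_n
        self.count = 0

    def process(self, event, current_rate_hz: float):
        if current_rate_hz <= self.max_rate_hz:
            return event                           # below the limit: pass everything
        self.count += 1
        if self.count % self.drop_every_n == 0:
            return None                            # drop every n-th event above the limit
        return event

rc = RateControlModule(max_rate_hz=1000.0, drop_every_n=2)
out = [rc.process({"id": i}, current_rate_hz=5000.0) for i in range(4)]
print(out)   # roughly half of the events are forwarded while the rate is too high
```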
- since the interface modules 200 are programmable, any of them can easily be bypassed by issuing appropriate commands to the module. For example, when coordinate flipping is not required, the entire region of interest module 204 can be bypassed, directly connecting the subsampling module 203 to the event routing module 205.
- FIG. 7 shows a certain class of embodiments in which the sensor is a sound sensor recording vibrations.
- the sensor includes an amplifier (refer to FIG. 5) and a filter, and is configured to generate events from the sound sensor, asynchronously encoding the events 100 from a power spectrum recorded by the sensor. Additionally, the amplifier can be configured as a shift channel. As a combined unit, the sound sensor and amplifier can collectively be viewed as an event-driven sensor 10.
- the event-driven sensor 10 delivers the event 100 to the fusion module 202 , and the event 100e is delivered from a different processing pipeline and fused into the event 100 generated by the event-driven sensor 10 .
- the fused event 100 is then passed to the replication module 201, so the fused event 100 is replicated.
- Duplicate events 100c are then passed to the same processing pipeline, which is where event 100e is received at fusion module 202, or passed to other processing pipelines (not shown). The effect of this is to allow a great deal of freedom in designing or handling the daisy chain. It is not difficult to see that in this type of embodiment, many events 100, 100e may have been fed in early in the daisy chain.
- the advantage of the programmable modules of the interface system 20, including the daisy chain, is that the processor 30 can process the events 100 directly on the basis of the unified event format or/and event address format and perform its intended purpose without further preprocessing.
- Mapping module 206: between the subsampling module 203 responsible for pooling (a term here essentially equivalent to subsampling) and the event routing module 205 responsible for routing events (refer to FIG. 6), a mapping module 206 (which itself may include the ROI module 204) may be placed in order to enable rich event address mapping operations.
- an interface module is/includes a mapping module (such as the mapping module 206 described above), wherein the mapping module 206 is configured to map one event address to another event address.
- the mapping module 206 includes one or a combination of the following:
- a region of interest module (ROI 204);
- a flip or/and rotation module, which is configured to flip or/and rotate the event address of an event.
- all interface modules of the present invention can be bypassed by their internal programmable switches.
- FIG. 8 shows a daisy chain implemented by programmable interface modules 200, which are incorporated into interface system 20, in a certain type of embodiment.
- Event address rewriting module 207: the event-driven sensor 10 provides a stream of events 100, which are sent to the optional event address rewriting module 207.
- the event address rewriting module 207 is configured to rewrite the event format into a common format for subsequent processing steps, namely to convert the event address received from the sensor into a unified address format, so that a unified event address format is passed along the daisy chain.
- the event address rewrite module 207 may be programmed for a particular sensor model in order to accommodate virtually any type of sensor 10 that provides any event format.
- the format of the event may be related to the byte-order and format of the event address stored in the generated event.
- the unified event address format is a predefined data format; subsequent processing can rely on this predefined format, so double checks of the event format can be omitted, achieving faster processing.
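- an illustrative sketch of the address-rewriting idea, assuming two hypothetical sensor-specific encodings converted to one unified format:

```python
# Illustrative sketch: convert sensor-specific event encodings (two invented formats)
# into one predefined, unified address format so downstream modules never re-check it.
def rewrite_event_address(raw: bytes, sensor_model: str) -> dict:
    if sensor_model == "sensor_a":                    # hypothetical: X byte then Y byte
        x, y = raw[0], raw[1]
    elif sensor_model == "sensor_b":                  # hypothetical: packed 16-bit address
        packed = int.from_bytes(raw, "little")
        x, y = packed & 0xFF, (packed >> 8) & 0xFF
    else:
        raise ValueError("unknown sensor model")
    return {"x": x, "y": y, "fmt": "unified"}         # same output format in every case

print(rewrite_event_address(bytes([12, 34]), "sensor_a"))
print(rewrite_event_address((34 << 8 | 12).to_bytes(2, "little"), "sensor_b"))
```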
- Event address filtering module 208: once event and event address formatting has been completed by the event address rewriting module 207, the events are further processed by the event address filtering module 208.
- the event address filtering module 208 is configured to filter out a series of events having a particular selected event address.
- the selected event address can be stored, read and written in CAM memory (Content-addressable memory). These filters allow filtering out hot pixels or the like. So the event address filtering module 208 may be a hot pixel filtering module.
- the event address filtering module 208, as the first module or a part thereof, can reduce the number of events transmitted on the daisy chain at an early stage, which reduces the energy consumption of the daisy chain. If the processor 30 also has the ability to filter addresses, the event address rewriting module 207 can be bypassed.
- the filtered events 100 are then sent to the replication module 201 or/and the fusion module 202, which respectively provide replicated sensor events 100c to external systems and incorporate external event sources 100e.
- Replication module 201 and fusion module 202 can be bypassed independently by programming.
- the two modules may be arranged in either of two sequential processing orders: the replication module 201 first and then the fusion module 202, or vice versa.
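- Purely as a behavioral illustration of replication and fusion (not the hardware implementation), the Python sketch below copies each passing event to an external output queue and merges events arriving from an external source into the chain. The queue objects and the dictionary event representation are assumptions made for this example.

```python
from collections import deque

def replicate(events, external_out: deque):
    """Replication: emit a copy (100c) to an external pipeline, pass the original on."""
    for ev in events:
        external_out.append(dict(ev))    # replicated event for the external system
        yield ev                         # original event continues along the chain

def fuse(events, external_in: deque):
    """Fusion: interleave chain events (100) with externally supplied events (100e)."""
    for ev in events:
        yield ev
        while external_in:               # drain any pending external events
            yield external_in.popleft()

# Example usage with hypothetical event dicts.
chain = [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
ext_out, ext_in = deque(), deque([{"x": 9, "y": 9}])
merged = list(fuse(replicate(chain, ext_out), ext_in))
assert len(ext_out) == 2 and len(merged) == 3
```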
- Sub-sampling module 203: after the replication module 201 or/and the fusion module 202, the sub-sampling module 203 processes the incoming events in the manner of the previous embodiment. Placing the sub-sampling module 203 at this position in the daisy chain allows it to handle all events, even those that originate externally.
- the region of interest module 204 follows the sub-sampling module 203 and processes all events sent by the sub-sampling module 203.
- the region of interest module 204 reduces the number of event addresses, thereby reducing the processing workload.
- the region of interest module 204 may likewise be configured to flip or/and rotate the X, Y coordinates of the event address.
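- The following Python sketch illustrates, under assumed coordinates and bounds, how a region of interest stage could discard events outside a programmable X/Y window and optionally flip the remaining coordinates; it is an illustration only, and the 256*256 geometry and the default ROI bounds are assumptions for the example.

```python
def roi_filter(ev, x_range=(16, 239), y_range=(16, 239),
               flip_x=False, flip_y=False, width=256, height=256):
    """Drop events outside the programmable ROI; optionally flip X/Y coordinates.

    The 256x256 geometry and the ROI bounds are assumptions for this example.
    Returns the (possibly adjusted) event, or None if the event is discarded.
    """
    x, y = ev["x"], ev["y"]
    if not (x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]):
        return None                      # outside the region of interest: drop
    if flip_x:
        x = width - 1 - x
    if flip_y:
        y = height - 1 - y
    return {**ev, "x": x, "y": y}

assert roi_filter({"x": 5, "y": 100}) is None
assert roi_filter({"x": 20, "y": 100}, flip_x=True)["x"] == 235
```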
- Event routing module 205: arranged after the region of interest module 204 is an event routing module 205, which is configured to prepare an event, for example by providing header information for the event 100, before the event is sent to the processor 30.
- the daisy chain shown in FIG. 8 provides a unified approach for efficient, fast, and flexible processing of events from sensor 10 or other sources.
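- To make the chaining and the per-module bypass concrete, here is a small, self-contained Python sketch of a daisy chain in which each stage is a callable guarded by a programmable enable flag and may drop an event by returning None. The stage bodies are illustrative stand-ins, not the modules of the embodiment.

```python
# Behavioral sketch of a programmable daisy chain of interface stages.
# Each stage is (enabled, function); a disabled stage is bypassed, mirroring
# the internal programmable bypass switches. Stage bodies are illustrative only.

def run_chain(event, stages):
    """Pass one event through the chain; a stage returning None drops it."""
    for enabled, stage in stages:
        if not enabled:
            continue                     # programmable bypass of this stage
        event = stage(event)
        if event is None:
            return None                  # event filtered out of the chain
    return event

drop_hot_pixel = lambda ev: None if (ev["x"], ev["y"]) == (7, 7) else ev
halve_resolution = lambda ev: {**ev, "x": ev["x"] >> 1, "y": ev["y"] >> 1}
add_header = lambda ev: {"header": {"dest": "processor"}, **ev}

stages = [(True, drop_hot_pixel), (True, halve_resolution), (True, add_header)]
assert run_chain({"x": 7, "y": 7}, stages) is None
assert run_chain({"x": 8, "y": 6}, stages)["x"] == 4
```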
- Hot pixel filter module 208' (hot pixel filter module): FIG. 9 shows a specific embodiment of a certain type of event address filter module 208, namely the hot pixel filter module 208'.
- the function of the hot pixel filter module 208' is to filter events with specific event addresses. This allows such events, for example, to be reduced or completely removed from the daisy chain. Such events are removed because the corresponding pixels of an input device, such as a 2D array sensor, are compromised or damaged.
- the hot pixel filtering module 208' includes an input 208'-1 for receiving the event 100.
- Hot pixel filtering enable determination step S800: after the event 100 is received, the hot pixel filtering enable determination step S800 is carried out. If filtering is determined to be disabled (No, Disabled), the hot pixel filter module 208' can be bypassed directly through the programmable switch, and the event 100 is fed directly (S801) to the output 208'-2 of the hot pixel filter module 208'. If S800 determines that filtering is enabled (Yes, Enabled), preferably the preset list of event addresses to be filtered is read from the CAM memory 208'-3. In the address comparison/matching step S802, it is verified whether the address of the event 100 is one of the addresses to be filtered in the list.
- if a match is found, the event 100 is filtered out of the daisy chain (S803) and dropped. If no address in the list matches the address of the event 100, the event 100 is output to the pipeline at the output 208'-2 of the hot pixel filter module 208'.
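- A minimal Python model of the workflow of FIG. 9, with the CAM represented by an ordinary set of addresses, could look as follows; the enable flag, the set contents and the event layout are assumptions for the illustration and are not the circuit itself.

```python
# Behavioral sketch of the hot pixel filter (208'): S800 enable check,
# S802 CAM address match, S803 drop on match, otherwise pass to the output.

HOT_PIXELS = {(12, 5), (200, 131)}        # stands in for the CAM list (208'-3)

def hot_pixel_filter(ev, enabled=True, hot_pixels=HOT_PIXELS):
    if not enabled:                       # S800: disabled -> bypass (S801)
        return ev
    if (ev["x"], ev["y"]) in hot_pixels:  # S802: address matches the list
        return None                       # S803: filter out and drop
    return ev                             # no match -> forward on the chain

assert hot_pixel_filter({"x": 12, "y": 5}) is None
assert hot_pixel_filter({"x": 12, "y": 5}, enabled=False) is not None
assert hot_pixel_filter({"x": 1, "y": 1}) == {"x": 1, "y": 1}
```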
- FIG. 10 shows the working flow chart of the sub-sampling module 203 in some embodiments.
- An incoming event 100 is processed by evaluating its associated address, particularly its address coordinates, such as X, Y, Z coordinates.
- the address of the event 100 is split into three different addresses by the splitting module 203-4.
- the coordinates X, Y, Z are shifted or divided in operation S901.
- the coordinates X, Y and Z are thereby sub-sampled into a coordinate set with a lower data volume, which effectively reduces the number of pixel coordinates.
- the event addresses processed in this way are then merged by the address reorganization module 203-5, and the adjusted addresses are sent to the subsequent stage together with the event 100 for further processing.
- the separation module 203-4 is configured to route the event 100 to a scaling register in the associated sub-sampling module according to the address value (e.g. X, Y, Z coordinates) of the received event. The scaling register is configured to split, sub-sample, pool or/and shift the received address value and to output the address value to the address reorganization module 203-5 in the sub-sampling module. The address reorganization module 203-5 is configured to adjust the event address according to the scaled address value and then to send the adjusted event along the daisy chain.
- the sub-sampling module can adjust the pixel resolution of the 2D array sensor so that the number of pixels on the X and Y axes is reduced.
- for example, the resolution of the image handled by the processor can thereby be reduced from 256*256 to 64*64.
- it can also be processed by a similarly configured sub-sampling module as described above.
- the event 100 may also include a channel identifier, which is not processed in the manner described above, but simply passes through (S900) the sub-sampling module 203.
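- The flow of FIG. 10 can be summarized in software as splitting the address, shifting each coordinate by a programmable factor, and reassembling the address, with any channel identifier passed through unchanged. The sketch below is such a summary; the shift amounts and the 256*256-to-64*64 example are illustrative assumptions consistent with the text above.

```python
# Behavioral sketch of the sub-sampling module (203): split the event address
# (203-4), shift the X/Y/Z coordinates (S901), and reassemble it (203-5).
# A channel identifier, if present, is simply passed through (S900).

def subsample(ev, shift_x=2, shift_y=2, shift_z=0):
    x, y = ev["x"] >> shift_x, ev["y"] >> shift_y          # e.g. 256x256 -> 64x64
    out = {**ev, "x": x, "y": y}
    if "z" in ev:
        out["z"] = ev["z"] >> shift_z
    return out                                             # channel id untouched

ev = {"x": 255, "y": 128, "channel": 3}
assert subsample(ev) == {"x": 63, "y": 32, "channel": 3}
```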
- any module, component or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storing information, such as computer/processor readable instructions, data structures, program modules and/or other data. Any such non-transitory computer/processor storage medium may be part of or accessible or connectable to the device. Any application or module described herein may be implemented using computer/processor readable/executable instructions, which may be stored or otherwise maintained by such a non-transitory computer/processor readable storage medium.
Claims (38)
- An integrated circuit, comprising an event-driven sensor (10), an event-driven interface system (20) and an event-driven processor (30), characterized in that: the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are coupled on a single chip (3).
- The integrated circuit according to claim 1, characterized in that: the event-driven sensor (10) is configured to asynchronously generate and asynchronously output an event (100) after an input device (11) of the event-driven sensor (10) detects an event-generating signal or/and a change in the event-generating signal, the event (100) comprising or being associated with an event address indicating the input device (11), and the output of the event-driven sensor (10) is coupled to the input of the event-driven interface system (20); the event-driven interface system (20) is configured to asynchronously receive the event (100) and pre-process the received event (100), and the output of the event-driven interface system (20) is coupled to the input of the event-driven processor (30); the event-driven processor (30) is configured to receive the event (101) pre-processed by the event-driven interface system (20) and to process the received event (101) asynchronously; and the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are coupled on the single chip (3) via an interposer (40).
- The integrated circuit according to claim 2, characterized in that: the event-driven interface system (20) and the event-driven processor (30) are both located on a first die (1-1); or the event-driven sensor (10) and the event-driven interface system (20) are both located on a second die (1-2); or a part of the event-driven interface system (20) and the event-driven processor (30) are both located on the first die (1-1) and another part of the event-driven interface system (20) and the event-driven sensor (10) are both located on the second die (1-2).
- The integrated circuit according to claim 2, characterized in that: the event-driven interface system (20) and the event-driven processor (30) are both located on a first die (1-1), and the second die (1-2) on which the event-driven sensor (10) is located is stacked on top of the first die (1-1) on which the event-driven interface system (20) and the event-driven processor (30) are located.
- The integrated circuit according to claim 2, characterized in that: the interposer (40) is a silicon interposer or a glass interposer.
- The integrated circuit according to claim 2, characterized in that: the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are packaged on the single chip (3) by 2.5D or 3D packaging technology.
- The integrated circuit according to claim 2, characterized in that: the event-driven sensor (10) is one of, or a combination of, the following types: a point sensor, a 1D sensor, a 2D sensor, a 3D sensor.
- The integrated circuit according to claim 2, characterized in that: the event-driven sensor (10) is one of, or a combination of, the following types: a sound/vibration sensor, a dynamic vision sensor.
- The integrated circuit according to claim 2, characterized in that: the event-driven processor (30) is configured with a spiking neural network.
- The integrated circuit according to claim 2, characterized in that: the event-driven processor (30) is configured with a spiking convolutional neural network.
- The integrated circuit according to claim 3 or 4, characterized in that: the first die and the second die are manufactured using different processes.
- The integrated circuit according to any one of claims 2 to 10, characterized in that: the event-driven interface system (20) comprises at least one interface module (200), and the interface modules (200) form a programmable daisy chain that asynchronously processes events (100) received from the event-driven sensor (10).
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a replication module (201) configured to: receive an event (100), the event (100) coming from the event-driven sensor (10) or from another interface module (200) of the event-driven interface system (20), perform a replication operation to obtain a replicated event (100c), send the replicated event (100c) to an external processing pipeline, and send the event (100) along the daisy chain.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a fusion module (202) configured to: receive events (100, 100e) from at least two different sources, wherein the event (100) comes from another interface module (200) of the event-driven interface system (20) or from the event-driven sensor (10), and the event (100e) comes from a component/module of the integrated circuit or of another integrated circuit or from another event-driven sensor, and send some or all of the received events (100, 100e) along the programmable daisy chain to a subsequent interface module (200).
- The integrated circuit according to claim 13, characterized in that: said other interface module (200) of the integrated circuit is the fusion module (202).
- The integrated circuit according to claim 14, characterized in that: said other interface module (200) of the integrated circuit is the replication module (201).
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a sub-sampling module (203) configured to assign a single address to a plurality of received events (100).
- The integrated circuit according to claim 17, characterized in that: a separation module (203-4) comprised in the sub-sampling module (203) is configured to route the event (100) to an associated scaling register in the sub-sampling module (203) according to the address value of the received event (100); the scaling register is configured to split, sub-sample, pool or/and shift the received address value and output the address value to an address reorganization module (203-5) in the sub-sampling module (203); and the address reorganization module (203-5) is configured to adjust the event address according to the scaled address value and then send the address-adjusted event (100) along the programmable daisy chain.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a region of interest module (204) configured to: adjust an attribute of at least one event address, the adjustment comprising one or more of the following: shifting, flipping, swapping or/and rotating the attribute of the at least one event address; or/and discard events (100) whose address attribute values lie outside a programmable range of address attribute values, and send the non-discarded events (100) along the programmable daisy chain.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises an event routing module (205) configured to: receive an event (100), add header information to the received event (100), and send the event (100), together with the header information of the event (100), to the event-driven processor (30) or/and to another event-driven processor or another processing pipeline.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a rate control module configured to: once a maximum rate is exceeded, send only some of the events (100) along the programmable daisy chain, so as to limit the event rate to no more than the maximum rate.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises a mapping module (206) configured to map one event address to another event address.
- The integrated circuit according to claim 22, characterized in that: the mapping module (206) comprises one of, or a combination of, the following: a region of interest module, a lookup table module, a flip or/and rotation module; wherein the flip or/and rotation module is configured to flip or/and rotate the event address of the event (100).
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises an event address rewrite module (207) configured to convert received event addresses into a unified address format, whereby a unified event address format is passed along the programmable daisy chain.
- The integrated circuit according to claim 12, characterized in that: the at least one interface module (200) comprises an event address filter module (208) configured to filter out a series of events (100) having particular, selected event addresses.
- The integrated circuit according to claim 25, characterized in that: the event address filter module (208) is specifically a hot pixel filter module (208') configured to filter events (100) having specific event addresses, and a preset list of event addresses to be filtered is stored in a CAM memory (208'-3).
- The integrated circuit according to claim 12, characterized in that: any one or more interface modules (200) of the event-driven interface system (20) can be bypassed by a programmable switch.
- An event-driven interface system (20), coupled between an event-driven sensor (10) and an event-driven processor (30) to form an integrated circuit, the event-driven sensor (10) generating and asynchronously outputting an event (100), the event (100) comprising or being associated with an event address indicating the input device (11) on the event-driven sensor (10) that generated the event; characterized in that: the event-driven interface system (20) comprises at least one interface module (200), and the interface modules (200) form a programmable daisy chain that asynchronously processes events (100) received from the sensor (10).
- The event-driven interface system (20) according to claim 28, characterized in that: the at least one interface module (200) comprises one or more of the following: a replication module (201), a fusion module (202), a sub-sampling module (203), a region of interest module (204) and an event routing module (205); wherein: the replication module (201) is configured to receive an event (100), the event (100) coming from the event-driven sensor (10) or from another interface module (200) of the event-driven interface system (20), perform a replication operation to obtain a replicated event (100c), send the replicated event (100c) to an external processing pipeline, and send the event (100) along the daisy chain; the fusion module (202) is configured to receive events (100, 100e) from at least two different sources, wherein the event (100) comes from another interface module (200) of the event-driven interface system (20) or from the event-driven sensor (10), and the event (100e) comes from a component/module of the integrated circuit or of another integrated circuit or from another event-driven sensor, and to send some or all of the received events (100, 100e) along the programmable daisy chain to a subsequent interface module (200); the sub-sampling module (203) is configured to assign a single address to a plurality of received events (100); the region of interest module (204) is configured to adjust an attribute of at least one event address, the adjustment comprising one or more of the following: shifting, flipping, swapping or/and rotating the attribute of the at least one event address, or/and to discard events (100) whose address attribute values lie outside a programmable range of address attribute values and send the non-discarded events (100) along the programmable daisy chain; and the event routing module (205) is configured to receive an event (100), add header information to the received event (100), and send the event (100), together with the header information of the event (100), to the event-driven processor (30) or/and to another event-driven processor or another processing pipeline.
- The event-driven interface system (20) according to claim 29, characterized in that: along the event transfer direction of the programmable daisy chain, the at least one interface module (200) has the following interface module coupling order: replication module (201), fusion module (202), sub-sampling module (203), region of interest module (204) and event routing module (205); or fusion module (202), replication module (201), sub-sampling module (203), region of interest module (204) and event routing module (205).
- The event-driven interface system (20) according to claim 30, characterized in that: for the replication module (201), said other interface module (200) of the event-driven interface system (20) from which the event (100) comes is specifically the fusion module (202); or/and for the fusion module (202), said other interface module (200) of the event-driven interface system (20) from which the event (100) comes is specifically the replication module (201).
- The event-driven interface system (20) according to claim 30, characterized in that: upstream of the interface module coupling order there is further provided an event address rewrite module (207) or/and an event address filter module (208); wherein the event address rewrite module (207) is configured to convert received event addresses into a unified address format, whereby a unified event address format is passed along the programmable daisy chain; and the event address filter module (208) is configured to filter out a series of events (100) having particular, selected event addresses.
- The event-driven interface system (20) according to claim 32, characterized in that: the event (100) is first processed by the event address rewrite module (207) and then processed by the event address filter module (208).
- The event-driven interface system (20) according to claim 32, characterized in that: the event address filter module (208) is specifically a hot pixel filter module (208') configured to filter events (100) having specific event addresses, and a preset list of event addresses to be filtered is stored in a CAM memory (208'-3).
- The event-driven interface system (20) according to claim 29, characterized in that: the at least one interface module (200) further comprises a mapping module (206), the mapping module (206) comprising one of, or a combination of, the following: a region of interest module, a lookup table module, a flip or/and rotation module; wherein the flip or/and rotation module is configured to flip or/and rotate the event address of the event (100).
- The event-driven interface system (20) according to claim 29, characterized in that: the at least one interface module (200) further comprises a rate control module configured to: once a maximum rate is exceeded, send only some of the events (100) along the programmable daisy chain, so as to limit the event rate to no more than the maximum rate.
- The event-driven interface system (20) according to any one of claims 28 to 36, characterized in that: any one or more interface modules (200) of the event-driven interface system (20) can be bypassed by a programmable switch.
- The event-driven interface system (20) according to any one of claims 28 to 36, characterized in that: the event-driven sensor (10), the event-driven interface system (20) and the event-driven processor (30) are coupled on a single chip (3) via an interposer (40), or are manufactured in the same die.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237028111A KR20230134548A (ko) | 2021-04-19 | 2021-04-19 | 인터페이스 시스템을 갖는 이벤트 구동 집적 회로 |
PCT/CN2021/088143 WO2022221994A1 (zh) | 2021-04-19 | 2021-04-19 | 具有接口系统的事件驱动集成电路 |
CN202180004244.5A CN115500090A (zh) | 2021-04-19 | 2021-04-19 | 具有接口系统的事件驱动集成电路 |
JP2023552014A JP2024507400A (ja) | 2021-04-19 | 2021-04-19 | インターフェースシステムを備えたイベント駆動型集積回路 |
US18/010,486 US20240107187A1 (en) | 2021-04-19 | 2021-04-19 | Event-driven integrated circuit having interface system |
EP21937245.5A EP4207761A4 (en) | 2021-04-19 | 2021-04-19 | EVENT-DRIVEN INTEGRATED CIRCUIT WITH INTERFACE SYSTEM |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/088143 WO2022221994A1 (zh) | 2021-04-19 | 2021-04-19 | 具有接口系统的事件驱动集成电路 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022221994A1 true WO2022221994A1 (zh) | 2022-10-27 |
Family
ID=83723647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/088143 WO2022221994A1 (zh) | 2021-04-19 | 2021-04-19 | 具有接口系统的事件驱动集成电路 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240107187A1 (zh) |
EP (1) | EP4207761A4 (zh) |
JP (1) | JP2024507400A (zh) |
KR (1) | KR20230134548A (zh) |
CN (1) | CN115500090A (zh) |
WO (1) | WO2022221994A1 (zh) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108574793B (zh) * | 2017-03-08 | 2022-05-10 | 三星电子株式会社 | 被配置为重新生成时间戳的图像处理设备及包括其在内的电子设备 |
EP3506622A1 (en) * | 2017-12-26 | 2019-07-03 | Prophesee | Method for outputting a signal from an event based sensor, and event-based sensor using such method |
US20200169681A1 (en) * | 2018-11-26 | 2020-05-28 | Bae Systems Information And Electronic Systems Integration Inc. | Ctia based pixel for simultaneous synchronous frame-based & asynchronous event-driven readouts |
KR20210000985A (ko) * | 2019-06-26 | 2021-01-06 | 삼성전자주식회사 | 비전 센서, 이를 포함하는 이미지 처리 장치 및 비전 센서의 동작 방법 |
CN111190647B (zh) * | 2019-12-25 | 2021-08-06 | 杭州微纳核芯电子科技有限公司 | 一种事件驱动型常开唤醒芯片 |
CN111031266B (zh) * | 2019-12-31 | 2021-11-23 | 中国人民解放军国防科技大学 | 基于哈希函数的动态视觉传感器背景活动噪声过滤方法、系统及介质 |
2021
- 2021-04-19 JP JP2023552014A patent/JP2024507400A/ja active Pending
- 2021-04-19 EP EP21937245.5A patent/EP4207761A4/en active Pending
- 2021-04-19 US US18/010,486 patent/US20240107187A1/en active Pending
- 2021-04-19 KR KR1020237028111A patent/KR20230134548A/ko active Search and Examination
- 2021-04-19 CN CN202180004244.5A patent/CN115500090A/zh active Pending
- 2021-04-19 WO PCT/CN2021/088143 patent/WO2022221994A1/zh active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1958433B1 (en) | 2005-06-03 | 2018-06-27 | Universität Zürich | Photoarray for detecting time-dependent image data |
CN107302695A (zh) * | 2017-05-31 | 2017-10-27 | 天津大学 | 一种基于仿生视觉机理的电子复眼系统 |
CN112534816A (zh) * | 2018-08-14 | 2021-03-19 | 华为技术有限公司 | 用于视频图像编码的编码参数的基于事件自适应 |
WO2020116416A1 (ja) * | 2018-12-05 | 2020-06-11 | 株式会社ソニー・インタラクティブエンタテインメント | 信号処理装置、電子機器、信号処理方法およびプログラム |
WO2020207982A1 (en) | 2019-04-09 | 2020-10-15 | Aictx Ag | Event-driven spiking convolutional neural network |
CN112598700A (zh) * | 2019-10-02 | 2021-04-02 | 传感器无限公司 | 用于目标检测和追踪的神经形态视觉与帧速率成像 |
CN112597980A (zh) * | 2021-03-04 | 2021-04-02 | 之江实验室 | 一种面向动态视觉传感器的类脑手势序列识别方法 |
Non-Patent Citations (4)
Title |
---|
AMON AMIRBRIAN TABA ET AL.: "A Low Power, Fully Event-Based Gesture Recognition System", 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR, 21 July 2017 (2017-07-21) |
PAUL A. MEROLLAJOHN V. ARTHUR ET AL.: "A million spiking-neuron integrated circuit with a scalable communication network and interface", SCIENCE, vol. 345, 8 August 2014 (2014-08-08) |
See also references of EP4207761A4 |
ZHE ZOURONG ZHAO ET AL.: "A hybrid and scalable brain-inspired robotic platform", SCIENTIFIC REPORTS, 23 October 2020 (2020-10-23) |
Also Published As
Publication number | Publication date |
---|---|
US20240107187A1 (en) | 2024-03-28 |
KR20230134548A (ko) | 2023-09-21 |
CN115500090A (zh) | 2022-12-20 |
EP4207761A4 (en) | 2024-06-19 |
JP2024507400A (ja) | 2024-02-19 |
EP4207761A1 (en) | 2023-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11677662B2 (en) | FPGA-efficient directional two-dimensional router | |
TWI746878B (zh) | 高頻寬記憶體系統以及邏輯裸片 | |
EP3298740B1 (en) | Directional two-dimensional router and interconnection network for field programmable gate arrays | |
US7155554B2 (en) | Methods and apparatuses for generating a single request for block transactions over a communication fabric | |
EP3557488A1 (en) | Neuromorphic circuit having 3d stacked structure and semiconductor device having the same | |
WO2017173755A1 (zh) | 片上数据划分读写方法、系统及其装置 | |
Yuan et al. | 14.2 A 65nm 24.7 µJ/Frame 12.3 mW Activation-Similarity-Aware Convolutional Neural Network Video Processor Using Hybrid Precision, Inter-Frame Data Reuse and Mixed-Bit-Width Difference-Frame Data Codec | |
TW201040962A (en) | Configurable bandwidth memory devices and methods | |
US7277975B2 (en) | Methods and apparatuses for decoupling a request from one or more solicited responses | |
US20210232902A1 (en) | Data Flow Architecture for Processing with Memory Computation Modules | |
CN112805727A (zh) | 分布式处理用人工神经网络运算加速装置、利用其的人工神经网络加速系统、及该人工神经网络的加速方法 | |
US20220308935A1 (en) | Interconnect-based resource allocation for reconfigurable processors | |
CN108256643A (zh) | 一种基于hmc的神经网络运算装置和方法 | |
WO2022221994A1 (zh) | 具有接口系统的事件驱动集成电路 | |
WO2020087276A1 (zh) | 大数据运算加速系统和芯片 | |
CN101562544B (zh) | 一种数据包生成器和数据包生成方法 | |
CN116246963A (zh) | 一种可重构3d芯片及其集成方法 | |
KR20200040165A (ko) | 분산처리용 인공신경망 연산 가속화 장치, 이를 이용한 인공신경망 가속화 시스템, 및 그 인공신경망의 가속화 방법 | |
US20220222194A1 (en) | On-package accelerator complex (ac) for integrating accelerator and ios for scalable ran and edge cloud solution | |
US11436185B2 (en) | System and method for transaction broadcast in a network on chip | |
US20230017778A1 (en) | Efficient communication between processing elements of a processor for implementing convolution neural networks | |
WO2022088171A1 (en) | Neural processing unit synchronization systems and methods | |
Lee et al. | Mini Pool: Pooling hardware architecture using minimized local memory for CNN accelerators | |
US11349782B2 (en) | Stream processing interface structure, electronic device and electronic apparatus | |
US10353455B2 (en) | Power management in multi-channel 3D stacked DRAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21937245; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 18010486; Country of ref document: US |
| ENP | Entry into the national phase | Ref document number: 2021937245; Country of ref document: EP; Effective date: 20230331 |
| ENP | Entry into the national phase | Ref document number: 20237028111; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 1020237028111; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 2023552014; Country of ref document: JP |
| NENP | Non-entry into the national phase | Ref country code: DE |