CN113516676A - Corner detection method, spiking neural network processor, chip and electronic product - Google Patents
Corner detection method, spiking neural network processor, chip and electronic product
- Publication number: CN113516676A (application CN202111075593.4A)
- Authority: CN (China)
- Prior art keywords: convolution kernel, type, neural network, neurons, detection method
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/10—Segmentation; Edge detection
          - G06T7/13—Edge detection
- G—PHYSICS
  - G01—MEASURING; TESTING
    - G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
      - G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
        - G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
          - G01C21/3804—Creation or updating of map data
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/04—Architecture, e.g. interconnection topology
            - G06N3/045—Combinations of networks
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/70—Determining position or orientation of objects or cameras
          - G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention discloses a corner detection method, a spiking neural network processor, a chip, and an electronic product. The corner detection method is applied to a chip configured with a spiking neural network having a hidden layer that comprises a plurality of neurons. The method receives pulse events output by a dynamic vision sensor, weights the pulse events with several convolution kernels, and projects the weighted events to the neurons in the hidden layer. The convolution kernels include diagonal convolution kernels and horizontal/vertical convolution kernels, and each convolution kernel comprises a forward excitation region, a non-responsive excitation region, and a suppression region. In contrast to prior art that occupies large amounts of storage and computing resources, data entering the chip is propagated forward directly through the configured network; the corner position is obtained without caching past data or performing extensive computation on historical information, yielding low storage and computing resource consumption.
Description
Technical Field
The invention relates to a corner detection method, a spiking neural network processor, a chip, and an electronic product, and in particular to a method for detecting corners of objects in a spiking neural network processor from events acquired by a dynamic vision camera, together with the corresponding spiking neural network processor, chip, and electronic product.
Background
The original purpose of Simultaneous Localization And Mapping (SLAM) is for a robot (such as a sweeping robot) to start from an unknown place in an unknown environment, localize its own position and pose from repeatedly observed map features (such as wall corners and pillars) while it moves, and then build a map incrementally around that position, thereby localizing and mapping at the same time.
The traditional solution uses a camera to capture and map features. Because the captured video is transmitted and processed frame by frame, there is a great deal of redundant information between adjacent frames, which implies heavy data transmission and bandwidth requirements and calls for computationally intensive processors and/or servers.
At present, there is a new type of sensor, the Dynamic Vision Sensor (DVS), that does not need to construct frames: it only detects brightness changes at each photosensitive pixel, and if nothing changes no event is generated, so by comparison the data it produces is extremely sparse.
With a DVS-based scheme, far less data has to be processed. An important task here is tracking points of interest: a SLAM algorithm, for example, tracks these extracted interest points and then computes the motion trajectory. Such a scheme eliminates a large amount of redundant data and computation, and is therefore more economical and energy-efficient.
Corners of objects are very robust interest points. When an object (e.g., a book) moves in front of the DVS, the edge information appearing in the DVS differs with the direction of motion, but the corners always show. Since corners remain visible however the object moves, they make good interest points.
Prior art 1: Vasco V., Glover A., Bartolozzi C. Fast event-based Harris corner detection exploiting the advantages of event-driven cameras. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4144-4149.
Prior art 2: Mueggler E., Bartolozzi C., Scaramuzza D. Fast event-based corner detection. British Machine Vision Conference (BMVC), 2017.
Prior art 3: Manderscheid J., Sironi A., Bourdis N., et al. Speed invariant time surface for learning to detect corner points with event-based cameras. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 10245-10254.
Prior art 1 converts the conventional corner detection approach, which exploits the difference between gradients at corner and edge pixels, into an asynchronous algorithm operating on the DVS event stream. A feature this algorithm shares with the others is that a certain number of recent events must be recorded. Prior art 2, for example, records a number of recent events and obtains corners by evaluating conditions on those events; prior art 3 likewise stores a number of events and applies machine learning to them to decide where corners are. Furthermore, these schemes run only as software simulations on general-purpose computing devices (e.g., personal computers or servers).
Through analysis, the inventor found that the prior art must store past detection information to detect corners, and because that additional data is processed with complicated logic operations, computing resources are also heavily consumed. For a spiking neural network processor, however, memory is at a premium: its core competitive advantage is low power consumption, and occupying large amounts of storage and computing resources greatly worsens chip area, power consumption, and real-time performance. Consuming more storage and computing resources is unacceptable for neuromorphic computing, where the chip is expected to process input data in real time without buffering it.
To address these problems, the invention provides a low-power, efficient corner detection scheme with low storage and computing resource consumption.
Disclosure of Invention
To solve the above technical problems, the invention provides a low-power, efficient corner detection scheme with low storage and computing resource consumption, realized as follows:
A corner detection method is applied to a chip configured with a spiking neural network that has a hidden layer comprising a plurality of neurons. The method receives pulse events output by a dynamic vision sensor, weights the pulse events with several convolution kernels, and projects them to the neurons in the hidden layer. The convolution kernels comprise a first-type convolution kernel and/or a second-type convolution kernel and/or a third-type convolution kernel, where the first-type convolution kernel is configured to detect corners of a horizontally moving object, the second-type convolution kernel is configured to detect corners of a vertically moving object, and the third-type convolution kernel is configured to detect corners of a diagonally moving object; horizontal, vertical, and diagonal are directions defined with the dynamic vision sensor as reference.
In some embodiments, each of the several convolution kernels comprises weight data configured to weight the pulse events projected to the neurons in the hidden layer.
In some embodiments, the pulse events output by the dynamic vision sensor are either pulse events output directly by the dynamic vision sensor, or pulse events output by input-layer neurons that receive the pulse events output directly by the dynamic vision sensor.
In some embodiments, the spiking neural network further comprises an output layer containing several output-layer neurons. Pulse events within the same receptive field are weighted by the several convolution kernels and projected to the hidden-layer neurons corresponding to the respective kernels; the pulse events output by those hidden-layer neurons are then projected to the output-layer neuron corresponding to that receptive field. The receptive field is the coverage, over the dynamic vision sensor or the input-layer neurons, determined by the size of the convolution kernel.
In some embodiments, the corner position is determined from the address information of the output-layer neuron corresponding to the receptive field.
In some embodiments, the third-type convolution kernels comprise kernels configured to detect corners of objects moving in the upper-right-to-lower-left, upper-left-to-lower-right, lower-left-to-upper-right, and lower-right-to-upper-left directions, respectively, and any third-type convolution kernel comprises three regions: a forward excitation region, a non-responsive excitation region, and a suppression region.
In some embodiments, the forward excitation region of a third-type convolution kernel lies in the incoming direction of the corner the kernel detects; its non-responsive excitation region lies in the kernel's central area; and its suppression region is the remainder of the kernel excluding the forward excitation and non-responsive excitation regions.
In some embodiments, the first-type convolution kernels comprise kernels configured to detect corners of objects moving left-to-right and right-to-left, respectively; the second-type convolution kernels comprise kernels configured to detect corners of objects moving top-to-bottom and bottom-to-top, respectively; and any first-type or second-type convolution kernel comprises the same three regions: a forward excitation region, a non-responsive excitation region, and a suppression region.
In some embodiments, the forward excitation region of a first-type or second-type convolution kernel lies in the incoming direction of the corner the kernel detects; its non-responsive excitation region lies in the kernel's central area; and its suppression region is the remainder of the kernel excluding the forward excitation and non-responsive excitation regions.
In some embodiments, the weight values of the forward excitation region are positive, the weight values of the non-responsive excitation region are zero, and the weight values of the suppression region are negative.
In some embodiments, the forward excitation region of a first-type or second-type convolution kernel narrows toward the center of the kernel.
A spiking neural network processor comprises a hidden layer and an output layer, both comprising circuits that simulate neurons, in which the corner detection method described above is applied.
A chip comprises a dynamic vision sensor and the spiking neural network processor described above, wherein the dynamic vision sensor is located in a first die, the spiking neural network processor is located in a second die, and the first and second dies are coupled through an interposer.
An electronic product comprises a first interface module, a second interface module, a processing module, and a response module, and further comprises the spiking neural network processor described above. The spiking neural network processor is coupled to the processing module through the first interface module, and the processing module is coupled to the response module through the second interface module. The spiking neural network processor recognizes an input environment signal and passes the recognition result to the processing module through the first interface module; the processing module generates a control instruction from the recognition result and passes it to the response module through the second interface module.
In some embodiments, the electronic product comprises the chip described above, and the chip's dynamic vision sensor is configured to detect an environmental signal (e.g., a visual signal) and provide the input environment signal to the spiking neural network processor.
Compared with the prior art, the invention has the following beneficial technical effects:
1. It discloses a corner detection scheme that actually runs in a neuromorphic chip, rather than the conventional computer simulation;
2. After data enters the chip, it is propagated forward directly through the configured network; the corner position is obtained without caching past data or performing extensive computation on historical information;
3. Because information is processed as soon as it enters the chip, the latency from input to output of corner information is extremely low;
4. The scheme places extremely low demands on hardware: no complex logic operations are needed, only the ordinary information-processing mechanism of spiking neurons, so the overall power consumption is extremely low;
5. Different components in the same chip can be fabricated with different manufacturing processes, reducing chip area and improving the image quality of the dynamic vision sensor.
The technical solutions, features, and means disclosed above may not be entirely identical or consistent with those described in the detailed description below. The features and means disclosed in this section can be reasonably combined with those disclosed in the detailed description to yield further technical solutions, which usefully supplement the detailed description. Likewise, some details in the drawings may not be explicitly described in the specification; if a person skilled in the art can deduce their technical meaning from other text, the drawings, common knowledge in the art, or other prior art (such as conference or journal articles), then solutions, features, and means not explicitly described in this section also belong to the disclosed technical content and may be combined in the same way to obtain new solutions. Technical solutions combined from features disclosed anywhere in the invention are used to support generalization of the technical solution, amendment of the patent document, and disclosure of the solutions.
Drawings
FIG. 1 is a schematic diagram of a corner detection system;
FIG. 2 is a schematic diagram of the structure of a convolution kernel for detecting corners moving in the top-right-to-bottom-left direction;
FIG. 3 is a schematic diagram of the structure of a convolution kernel for detecting corners moving in the top-left-to-bottom-right direction;
FIG. 4 is a schematic diagram of the structure of a convolution kernel for detecting corners moving in the top-to-bottom direction;
FIG. 5 is a schematic flow chart of corner detection by several convolution kernels.
Detailed Description
Even where the embodiments described in this section are depicted in the same drawings and refer to the same regions, the descriptions apply not only to the specific embodiments shown but also, as alternative descriptions, to potential embodiments sharing some of their technical features. The embodiments disclosed in this patent document include all reasonable combinations of particular features, even combinations not present in any single specific embodiment, provided such combinations are not logically inconsistent or meaningless.
Anywhere in this document, the same object may be expressed with terms that are not literally identical, for instance because a full name is abbreviated or a function word is inserted. In the context of the invention, unless explicitly stated otherwise, such terms refer to the same object.
The drawings are drawn for convenience in describing the technical solution and therefore cannot present it objectively and comprehensively. The dimensions, proportions, and quantities shown in the schematic drawings are unlikely to be exact representations of actual products, and the absence of a technical feature from a drawing does not mean the actual solution lacks that feature; the details of the drawings should therefore not unduly limit the true intent of the invention.
Neurons in a spiking neural network are simulations of biological neurons. Following expressions customary in the field, concepts inspired by biological neurons, such as synapses, membrane voltage, post-synaptic current, and post-synaptic potential, are referred to with the same terms when discussing the corresponding concepts in a spiking neural network. In a brain-like chip, circuits are designed to simulate neurons and circuits to simulate synapses; that is, in such chips these "biological" terms denote, by common convention in the hardware field, the corresponding analog circuits. Unless specifically indicated otherwise, references in this disclosure to such biological-sounding concepts mean the corresponding concepts in the spiking neural network rather than actual biological cells.
The terms used are interpreted as follows: (1) Corner: objects such as computer screens, tables, and books generally have geometric shape, and they usually have one or several corners formed where several edges meet. We call the corners of such real-world objects corner points.
(2) Convolution kernel: in the field of neural networks, in particular convolutional neural networks, a convolution kernel is a set of weight data, usually in matrix form, that weights/filters the data to be convolved.
(3) Receptive field: corresponds to the size of the convolution kernel. When convolving the pulse events to be convolved (generally speaking, a feature map) output by the DVS or by input-layer neurons, each weight in the kernel corresponds to one event; the range of DVS pixels or input-layer neurons covered by the kernel during a given convolution is called the receptive field of that convolution.
(4) Spiking Neural Network (SNN): known as the third generation of neural networks. Its simulated neurons are closer to biological reality than traditional artificial neurons: a neuron is not activated on every iteration (as it is in a typical multilayer perceptron network) but only when its membrane potential reaches a threshold. When a neuron is activated, it generates a signal that is transmitted to other neurons to raise or lower their membrane potentials.
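For illustration only, the firing rule just described can be sketched as a minimal leaky integrate-and-fire neuron in Python; the leak factor, threshold, and reset rule below are assumptions for the example, not values from this disclosure:

    class SpikingNeuron:
        """Minimal leaky integrate-and-fire model: fires only when the
        membrane potential crosses a threshold (assumed parameters)."""

        def __init__(self, threshold=1.0, leak=0.9):
            self.v = 0.0              # membrane potential
            self.threshold = threshold
            self.leak = leak          # multiplicative decay per time step

        def step(self, weighted_input):
            self.v = self.v * self.leak + weighted_input
            if self.v >= self.threshold:
                self.v = 0.0          # reset after firing
                return 1              # emit a pulse event
            return 0                  # stay silent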
(5) Input layer, hidden layer, output layer: a spiking neural network comprises many neurons, which can be divided into sets, each called a layer. The set of neurons that receives external input signals is called the input layer, the set that outputs signals from the network is called the output layer, and the remaining neurons form the hidden layers, which may be further subdivided into several hidden layers.
(6) Spiking neural network processor (also called a neuromorphic chip or brain-like chip): a special-purpose chip that differs from the traditional von Neumann computing architecture in that a spiking neural network, comprising a large number of neurons and synapses built from circuits, is configured inside the chip. Such chips include IBM's TrueNorth chip, Intel's Loihi chip, SynSense's DYNAP-CNN chip, and Tsinghua University's Tianjic chip.
(7) DVS (Dynamic Vision Sensor): a novel sensor. Unlike a conventional sensor, which must capture all pixels in the field of view and then output pictures frame by frame, a DVS captures only the pixels that change in the field of view and transmits only that changed-pixel information to the next stage, so its data is sparse and the data volume small.
(8) Forward excitation region, non-responsive excitation region, suppression region: these regions lie at different positions within a convolution kernel and form parts of it. Each region contains several weight values, and the weights within one region share the same sign. The weights in the forward excitation region are positive: a pulse event delivered through such a weight (corresponding to a synapse) helps raise the membrane voltage of the next-layer neuron the synapse connects to, stimulating that neuron to fire a pulse event. The weights of the non-responsive excitation region are 0: a pulse event weighted by this region cannot stimulate the connected next-layer neuron to fire. The weights in the suppression region are negative: they inhibit the rise of the connected next-layer neuron's membrane voltage, preventing it from firing a pulse event.
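As a toy illustration of these three region types (not part of the original disclosure; the 3 x 3 size and unit weight values are assumptions), the membrane-voltage contribution of one receptive field can be computed as follows:

    import numpy as np

    # Toy 3x3 kernel: forward excitation (+1) along the top, a
    # non-responsive (0) centre, suppression (-1) elsewhere.
    kernel = np.array([[ 1.0,  1.0,  1.0],
                       [-1.0,  0.0, -1.0],
                       [-1.0, -1.0, -1.0]])

    events = np.array([[1, 1, 0],   # binary pulse events in one receptive field
                       [0, 0, 0],
                       [0, 0, 0]])

    contribution = float((kernel * events).sum())
    print(contribution)  # 2.0 -> pushes the next-layer neuron toward firing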
Referring to FIG. 1, object 101 is the object to be detected, which may be, for example, a pillar, a wall corner, or a cabinet. The spiking neural network processor 100 is configured with a spiking neural network. It is arranged in an electronic device (not shown) and is used at least for corner detection of the object 101. Optionally, the corner information output by the spiking neural network processor 100 is passed to other components of the electronic device, which compute or extract a motion trajectory based at least on the received corner information. The object 101 may move relative to the electronic device, the electronic device may move relative to the object, or both may move; the invention is not limited to any particular case.
The spiking neural network processor 100 configured with a spiking neural network is also called a neuromorphic chip or brain-like chip. It includes a number of circuits that simulate neurons (called neurons in this document) and circuits that simulate synapses (called synapses in this document). These circuits imitate the way neurons and synapses in the brain work, receiving and transmitting information and producing an inference result. The spiking neural network processor 100 may be implemented as a synchronous digital circuit, a synchronous-asynchronous hybrid circuit, a purely asynchronous circuit, or an analog circuit; the invention is not limited to a specific implementation. For the design of the spiking neural network processor 100, see:
Prior art 4: WO2020/207982A1, published October 15, 2020. This prior art is incorporated by reference in its entirety into the disclosure of the present invention.
The DVS 102 receives signals from the surroundings, and light from the object 101 falling on the DVS 102 generates brightness-change pulse events as the object 101 or the electronic device moves. The DVS 102 delivers the pulse events to the spiking neural network processor 100, where neurons in the input layer 103 receive them and project them to the hidden layer 104 through synaptic connection weights.
The spiking neural network configured in the spiking neural network processor 100 comprises several layers, e.g., an input layer 103, a hidden layer 104, and an output layer 105, each containing a number of neuron-simulating circuits. The hidden layer 104 may itself consist of multiple hidden layers when needed. In one embodiment, the size of the DVS 102 is 128 × 128, and the neurons of the input layer 103 may correspond one-to-one to the pixels of the DVS 102.
In some embodiments, the scale of the input layer 103 differs from the resolution of the DVS. For example, with a 128 × 128 DVS 102 and a 64 × 64 input layer 103, part of the DVS pulse events are discarded, which in essence down-samples the data (pixels) of the DVS 102.
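One plausible way to realize such down-sampling (the text only states that part of the events are discarded; the even-coordinate scheme below is an assumption for illustration):

    def downsample_event(x, y, factor=2):
        """Map a 128x128 DVS event to a 64x64 input neuron, discarding
        events that fall off the coarser grid (one possible scheme)."""
        if x % factor == 0 and y % factor == 0:
            return x // factor, y // factor
        return None  # event discarded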
Preferably, the input layer 103 may also be omitted. In that case, the pulse events of the DVS 102 are input directly to the hidden layer 104 via weighted connections, saving one network layer and thus more hardware resources.
When the input layer 103 projects to the hidden layer 104 through the synaptic connection weights, the pulse events received by neurons in the hidden layer 104 drive their membrane voltages; once a membrane voltage exceeds its threshold, that neuron sends an output pulse event to the output layer 105.
In the projection from the input layer 103 to the hidden layer 104, the input pulse events can be weighted and convolved by several convolution kernels. Preferably, 8 convolution kernels are configured to handle corner detection in 8 directions. Of course, other numbers of convolution kernels may be used to convolve the output of the input layer 103 as needed, for example a combination of several of the 8 kernels.
Optionally, the 8 directional convolution kernels are: (1) diagonal convolution kernels, comprising a top-right-to-bottom-left kernel, a top-left-to-bottom-right kernel, a bottom-left-to-top-right kernel, and a bottom-right-to-top-left kernel; (2) horizontal and vertical (cross) convolution kernels, comprising a top-to-bottom kernel, a left-to-right kernel, a right-to-left kernel, and a bottom-to-top kernel. The kernels are used to detect object corners for predetermined motion directions, i.e., the motion of the object relative to the DVS 102, which covers three cases: the object stationary while the DVS 102 moves, the DVS 102 stationary while the object moves, and both moving. The DVS 102 serves as the reference object in this application when describing motion directions; this does not restrict the true motion of the object or of the DVS 102 relative to the ground frame (for example, the DVS 102 may equally be interpreted as stationary while the object moves).
Specifically: the top-right-to-bottom-left kernel is configured to detect corners of an object moving from top right to bottom left; the top-left-to-bottom-right kernel, corners of an object moving from top left to bottom right; the bottom-left-to-top-right kernel, corners of an object moving from bottom left to top right; and the bottom-right-to-top-left kernel, corners of an object moving from bottom right to top left. The top-to-bottom kernel is configured to detect corners of an object moving downward; the left-to-right kernel, corners of an object moving left to right; the right-to-left kernel, corners of an object moving right to left; and the bottom-to-top kernel, corners of an object moving upward. In short, the diagonal kernels detect corners of diagonally moving objects, and the cross kernels detect corners of horizontally and vertically moving objects. The construction of these kernels is detailed below with reference to FIGS. 2-4.
In other words, the 8 directional convolution kernels can also be classified into three types: a first type (some of the aforementioned cross kernels), a second type (the other cross kernels), and a third type (the diagonal kernels). The first-type kernels detect corners of horizontally moving objects; the second-type kernels detect corners of vertically moving objects; and the third-type kernels detect corners of diagonally moving objects, where horizontal, vertical, and diagonal are defined with the DVS 102 as reference. The first type comprises the left-to-right and right-to-left kernels; the second type comprises the top-to-bottom and bottom-to-top kernels; the third type comprises the top-right-to-bottom-left, top-left-to-bottom-right, bottom-left-to-top-right, and bottom-right-to-top-left kernels.
To clearly illustrate how corner detection for each motion direction is realized, the contents and structures of these convolution kernels are described below. The invention is not limited to the particular parameters given as examples for a particular application.
Referring to FIG. 2, a schematic top-right-to-bottom-left convolution kernel 200 is disclosed, which detects corners of an object moving from the top right toward the bottom left. The kernel is divided into several regions: a first region 201, a second region 202, and a third region 203. The first region 201 is the forward excitation region, located at the upper right of the kernel and occupying the upper part of the right side and the right part of the top of the kernel. After a neuron in the input layer 103 detects an event, the corresponding synaptic connections in this upper-right region stimulate the corresponding neuron in the hidden layer 104 to fire a pulse, which is then transmitted to the corresponding neuron in the output layer 105. The values of the forward excitation region are positive and preferably all equal, e.g., 0.8 or 1.
The second region 202 of the top-right-to-bottom-left kernel is the non-responsive excitation region, whose synaptic weights are all 0. When a neuron in the input layer 103 detects an event, the synaptic connections corresponding to the second region 202 of kernel 200 contribute nothing to the membrane voltage of the corresponding neuron in the hidden layer 104. The second region 202 thus lies in the middle (central) area of kernel 200.
The third region 203 of kernel 200 is the suppression region. It occupies the left side and bottom of the kernel, the lower part of the right side, and the left part of the top. Preferably, the first region 201 and the third region 203 each occupy half of the right side and half of the top of kernel 200. When a neuron in the input layer 103 detects an event, the synaptic connections corresponding to the third region 203 suppress, specifically lower, the membrane voltage of the corresponding neuron in the hidden layer 104, inhibiting it from firing a pulse event. In some embodiments the values of the third region 203 are negative, e.g., -2.5 or -1.
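A sketch of one plausible weight layout for kernel 200 follows (illustrative only: the exact region geometry is given by FIG. 2, which is not reproduced here, and the values +1/0/-1 are just the example magnitudes mentioned above):

    import numpy as np

    K = 9  # kernel size used in the embodiment of FIG. 5
    kernel_200 = -1.0 * np.ones((K, K))    # third region 203: suppression
    kernel_200[3:6, 3:6] = 0.0             # second region 202: non-responsive centre
    kernel_200[0, K // 2:] = 1.0           # right half of the top edge
    kernel_200[:K // 2 + 1, K - 1] = 1.0   # upper half of the right edge
    # the two strokes together form the L-shaped first region 201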
Referring to FIG. 3, a schematic top-left-to-bottom-right convolution kernel 300 is shown, which detects corners of an object moving from the top left toward the bottom right. This kernel likewise comprises three regions: a fourth region 301, a fifth region 302, and a sixth region 303. The fourth region 301 is again a forward excitation region, the fifth region 302 a non-responsive excitation region, and the sixth region 303 a suppression region. When a neuron in the input layer 103 detects an event, the synaptic connections of the fourth region 301 stimulate the corresponding neuron in the hidden layer 104 to fire a pulse, the fifth region 302 contributes nothing to that firing, and the sixth region 303 inhibits it.
In other words, the forward excitation region of a diagonal convolution kernel lies in the incoming direction of the corner the kernel detects; its non-responsive excitation region lies in the kernel's central area; and its suppression region is the remainder of the kernel excluding the other two regions. The forward excitation region is L-shaped (including horizontal, vertical, or mirrored variants).
Referring to FIG. 4, a schematic top-to-bottom convolution kernel 400 is shown, also comprising three regions: a seventh region 401, an eighth region 402, and a ninth region 403. After a neuron in the input layer 103 detects an event, the synaptic connections of the seventh region 401 at the top of the kernel stimulate the corresponding neuron in the hidden layer 104 to fire a pulse, which is transmitted to the corresponding neuron in the output layer 105. The synaptic connections of the eighth region 402 contribute nothing to the membrane voltage of the corresponding hidden-layer neuron, and those of the ninth region 403 suppress its membrane voltage, inhibiting pulse generation.
Furthermore, the inventor found that, to better detect top-to-bottom corner motion of the object 101, the seventh region 401 and the ninth region 403 should be connected, with the seventh region 401 gradually narrowing toward the center of kernel 400. This arrangement advantageously reduces the probability of false positives when detecting top-to-bottom corner motion.
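A corresponding sketch for kernel 400 (again illustrative only: the taper widths and values are assumptions inferred from the description of FIG. 4):

    import numpy as np

    K = 9
    kernel_400 = -1.0 * np.ones((K, K))    # ninth region 403: suppression
    kernel_400[3:6, 3:6] = 0.0             # eighth region 402: non-responsive centre
    centre = K // 2
    # Seventh region 401: forward excitation entering from the top,
    # narrowing row by row toward the centre (assumed taper).
    for row, half_width in enumerate((4, 3, 2)):
        kernel_400[row, centre - half_width:centre + half_width + 1] = 1.0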
In other words, the forward excitation region of a cross (horizontal or vertical) convolution kernel lies in the incoming direction of the corner the kernel detects; its non-responsive excitation region lies in the kernel's central area; and its suppression region is the remainder of the kernel excluding the other two regions.
Likewise: for the bottom-left-to-top-right kernel, the forward excitation region lies at the lower left, in the incoming direction of the detected corner; for the bottom-right-to-top-left kernel, at the lower right; for the left-to-right kernel, on the left; for the right-to-left kernel, on the right; and for the bottom-to-top kernel, at the bottom. In every case, the non-responsive excitation region occupies the kernel's central area, and the suppression region is the remainder of the kernel excluding the forward excitation and non-responsive excitation regions.
Note that although this application uses the orientations up, down, left, right, upper-left, lower-left, upper-right, and lower-right, all of these may map to different orientations depending on the chosen reference object and viewing angle. The DVS serves as the reference here merely for convenience and uniformity of description, not to restrict the choice of reference or viewing angle. Orientations so converted are considered equivalent and fall within the scope of this application.
The above 8 convolution kernels are realized as the weights of the synaptic connections from the input layer 103 to the hidden layer 104. In one embodiment, configuration information such as the synaptic weights representing the kernels can be deployed into the chip by a dedicated software tool. The DVS 102 then senses the environment and transmits the excited pulse events to the input layer 103; the resulting pulse sequence is passed to the hidden layer 104 under the weighting of the kernels; and at the output layer 105, the pulse events output by the hidden-layer neurons corresponding to the 8 kernels are added together to form the output. The output-neuron array may likewise be 128 × 128; if a neuron in the output layer 105 outputs a pulse event, a corner of the object 101 has been detected at the DVS pixel corresponding to that output neuron.
Referring to FIG. 5, a schematic flow of corner detection via the convolution operations of the above kernels is shown. The input layer 103 (or, if it is omitted, the pulse events are received directly from the DVS 102; the description below assumes the input layer 103 is present) comprises a number of neurons, e.g., 64 × 64 in one embodiment. The kernels optionally include the top-right-to-bottom-left kernel 200, the top-left-to-bottom-right kernel 300, and so on, each optionally of size 9 × 9. The input layer 103 can be divided into a number of receptive fields, such as a first receptive field 501 and a second receptive field 502. Optionally, the input layer 103 can be surrounded with zeros using the common padding technique, to compensate for peripheral neurons that could not otherwise form a complete receptive field of the same scale; this is common knowledge in artificial neural networks and is not elaborated here.
The first receptive field 501 corresponds to a number of neurons equal to the size of the convolution kernel. The pulse events output by each neuron in the first receptive field 501 are multiplied by the corresponding synaptic weights of the top-right-to-bottom-left convolution kernel 200; after this weighting, the pulse events of all neurons in the first receptive field 501 are projected to the first neuron 1041-1 of the first neuron set 1041 in the hidden layer 104.
The first receptive field 501 is then switched to a second receptive field 502, which comprises the same number of neurons as kernel 200. The neurons of the second receptive field 502 are convolved by kernel 200 in the same way, and the weighted result is projected to the second neuron 1041-2 of the first neuron set 1041.
The switch from the first receptive field 501 to the second receptive field 502 proceeds by a step (stride). The figure shows the case of stride 1; the invention does not limit the stride. As noted above, with the padding technique and a stride of 1, the convolution with the top-right-to-bottom-left kernel 200 makes the first neuron set 1041 of the hidden layer 104 also 64 × 64 in size.
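The standard convolution-size arithmetic behind this (a sketch; the pad value of 4 is what keeps a 9 × 9 kernel size-preserving at stride 1):

    def conv_output_size(n_in, kernel=9, stride=1, pad=4):
        # (64 + 2*4 - 9) // 1 + 1 = 64, i.e. a same-size hidden map
        return (n_in + 2 * pad - kernel) // stride + 1

    assert conv_output_size(64) == 64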
Similarly, the top-left-to-bottom-right kernel 300 undergoes the same convolution process, projecting the pulse events output by the neurons of the input layer 103 through synaptic weights to the neurons of the second neuron set 1042. Through the convolutions of the 8 kernels, the pulse events are thus projected to 64 × 64 × 8 neurons in the hidden layer.
Then, for a given receptive field (e.g., the first receptive field 501), the pulse events output by the corresponding 8 hidden-layer neurons weighted by the (e.g., 8) different kernels (distributed across the first to eighth neuron sets 1041-1048, such as the first neuron 1041-1 of set 1041, the first neuron 1042-1 of set 1042, and the first neuron 1048-1 of set 1048) are projected to neuron 105-1 in the output layer 105. The scale of the output layer 105 equals the pixel scale of the DVS (or the number of input-layer neurons corresponding to the DVS after down-sampling), and the output-layer neurons that emit pulse events, or their position (address) information, constitute the corner detection result.
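To tie the stages together, a stateless sketch of the whole forward pass (illustrative only: binary event frames, unit thresholds, and the use of scipy are assumptions of the example; real spiking neurons integrate membrane voltage over time rather than thresholding a single frame):

    import numpy as np
    from scipy.signal import correlate2d

    def detect_corners(events, kernels, hidden_thr=1.0, out_thr=1.0):
        """Weight an event frame with each directional kernel, threshold
        into hidden-layer spikes, sum the hidden maps per pixel in the
        output layer, and threshold again."""
        hidden = [correlate2d(events, k, mode="same") for k in kernels]
        spikes = [(h >= hidden_thr).astype(float) for h in hidden]
        drive = np.sum(spikes, axis=0)
        return np.argwhere(drive >= out_thr)  # corner (row, col) addresses

With, e.g., the kernel_200 and kernel_400 sketches above in the kernel list, the returned addresses play the role of the position information of the firing output-layer neurons.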
Furthermore, the inventor found that integrating the DVS and the spiking neural network processor 100 in the same chip effectively eliminates signal loss and noise interference (in the prior art the DVS and the processor are typically coupled through USB) while pursuing lower chip footprint and manufacturing cost. More importantly, the DVS and the spiking neural network processor 100 can adopt different fabrication processes, e.g., a CIS-CMOS process for the DVS and a conventional CMOS process for the processor, which reduces cost and improves DVS image quality without increasing chip area. Specifically, the DVS is located in a first die and the spiking neural network processor 100 in a second die; the two dies are electrically connected through an interface circuit located in the first die or/and the second die, and are coupled through an interposer into a single chip (which includes the spiking neural network processor 100).
In addition, an electronic product equipped with the spiking neural network processor 100 or the aforementioned chip can perform other required functions, such as motion-trajectory tracking, based on the corner detection result. In some embodiments the electronic product is a sweeping robot. Specifically, the electronic product comprises a first interface module, a second interface module, a processing module, and a response module, and further comprises the spiking neural network processor; the processor is coupled to the processing module through the first interface module, and the processing module to the response module through the second interface module. The processor recognizes an input environment signal and passes the recognition result to the processing module through the first interface module; the processing module generates a control instruction from the result and passes it to the response module through the second interface module. The spiking neural network processor and the dynamic vision sensor may also together form the aforementioned chip.
While the invention has been described with reference to specific features and embodiments, various modifications and combinations can be made without departing from it. Accordingly, the specification and figures are to be regarded simply as illustrations of some embodiments of the invention defined by the appended claims, and are intended to cover any and all modifications, variations, combinations, or equivalents within its scope. Thus, although the invention and its advantages have been described in detail, various changes, substitutions, and alterations can be made without departing from the invention as defined by the appended claims. Moreover, the scope of the application is not limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
To achieve better technical results or for certain applications, a person skilled in the art may make further improvements on the technical solution based on the present invention. However, even if the partial modification/design is inventive or/and advanced, the technical solution should also fall within the protection scope of the present invention according to the "overall coverage principle" as long as the technical features covered by the claims of the present invention are utilized.
Several technical features recited in the appended claims may be replaced by alternative technical features, and the order of some technical steps or of the organization of materials may be rearranged. Those skilled in the art can readily conceive of such alternatives, or reorder the technical steps and the material organization, while still using substantially the same means to solve substantially the same technical problem and achieve substantially the same technical effect. Therefore, even if such means or/and order are explicitly defined in the claims, those modifications, changes, and substitutions shall fall within the protection scope of the claims under the "doctrine of equivalents".
Where a claim recites an explicit numerical limitation, one skilled in the art would understand that other reasonable numerical values around the stated value may also apply to a particular embodiment. Such design solutions, which differ only in detail without departing from the inventive concept, also fall within the protection scope of the claims.
The method steps and elements described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination of both; to clearly illustrate this interchangeability of hardware and software, the steps and elements of the embodiments have been described above in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention as claimed.
Further, any module, component, or device executing instructions exemplified herein can include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storing information, such as computer/processor readable instructions, data structures, program modules, and/or other data. Any such non-transitory computer/processor storage media may be part of or accessible or connectable to a device. Any application or module described herein may be implemented using computer/processor readable/executable instructions that may be stored or otherwise maintained by such non-transitory computer/processor readable storage media.
Meaning of reference numerals
100 spiking neural network processor
101 object
102 Dynamic Vision Sensor (DVS)
103 input layer
104 hidden layer
105 output layer
200 upper-right-to-lower-left convolution kernel
201 first region
202 second region
203 third region
300 upper-left-to-lower-right convolution kernel
301 fourth region
302 fifth region
303 sixth region
400 up-to-down convolution kernel
401 seventh region
402 eighth region
403 ninth region
501 first receptive field
502 second receptive field
1041 first neuron set
1041-1 first neuron in the first neuron set 1041
1041-2 second neuron in the first neuron set 1041
1042 second neuron set
1042-1 first neuron in the second neuron set 1042
1048 eighth neuron set
1048-1 first neuron in the eighth neuron set 1048
105-1 neuron in the output layer 105
Claims (14)
1. A corner point detection method applied to a chip provided with a spiking neural network, characterized in that:
the spiking neural network is provided with a hidden layer, and the hidden layer comprises a plurality of neurons;
receiving pulse events output by a dynamic vision sensor;
weighting the pulse events using several convolution kernels and then projecting them to the neurons in the hidden layer;
the several convolution kernels comprise a first type convolution kernel or/and a second type convolution kernel or/and a third type convolution kernel, wherein the first type convolution kernel is configured to detect corners of a horizontally moving object;
the second type convolution kernel is configured to detect corners of a vertically moving object;
the third type convolution kernel is configured to detect corners of an obliquely moving object, wherein the horizontal, vertical, and oblique directions are defined with the dynamic vision sensor as a reference.
2. The corner point detection method according to claim 1, characterized by:
the several convolution kernels include weight data, and the weight data is configured to weight the pulse events projected to the neurons in the hidden layer.
3. The corner point detection method according to claim 1, characterized by:
the pulse events output by the dynamic vision sensor are:
pulse events directly output by the dynamic vision sensor; or
pulse events output by input layer neurons that receive the pulse events directly output by the dynamic vision sensor.
4. A corner point detection method according to claim 3, characterized by:
the spiking neural network further comprises an output layer, and the output layer comprises a plurality of output layer neurons;
pulse events in the same receptive field are weighted by the several convolution kernels and respectively projected to the neurons in the hidden layer corresponding to each convolution kernel, and the pulse events output by those hidden layer neurons are projected to the neuron in the output layer corresponding to the receptive field; wherein the receptive field is the coverage area over the dynamic vision sensor or the input layer neurons, determined by the size of the convolution kernels.
5. The corner point detection method according to claim 4, characterized by:
the corner position is determined according to the address information of the neuron in the output layer corresponding to the receptive field.
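As a reading aid only (the claims are hardware-neutral and define no software API), the following Python sketch shows one possible software interpretation of claims 1-5. The kernel weights, the firing threshold, the 7×7 kernel size, and the event format are all assumptions made for illustration:

```python
import numpy as np

def toy_kernel(direction, k=7):
    """Illustrative weight data only (cf. claims 6-11): positive in the
    incoming half, zero in the central region, negative elsewhere."""
    kern = -np.ones((k, k))                   # suppression region
    c = k // 2
    if direction == "left_to_right":          # first-type kernel
        kern[:, :c] = 1.0
    elif direction == "top_to_bottom":        # second-type kernel
        kern[:c, :] = 1.0
    else:                                     # a third-type (oblique) kernel
        kern[np.triu_indices(k, 1)] = 1.0
    kern[c - 1:c + 2, c - 1:c + 2] = 0.0      # non-responsive excitation region
    return kern

KERNELS = [toy_kernel(d) for d in
           ("left_to_right", "top_to_bottom", "upper_right_to_lower_left")]
THRESHOLD = 3.0  # assumed firing threshold of the simulated neurons

def detect_corners(events, width, height, k=7):
    """events: iterable of (x, y) pulse events from the DVS (claim 3).
    Returns positions of the receptive fields whose output-layer neuron
    fired; the neuron's address encodes the corner position (claim 5)."""
    frame = np.zeros((height, width))
    for x, y in events:                       # accumulate pulse events
        frame[y, x] += 1.0
    corners = []
    # One receptive field per k-by-k patch of the sensor (claim 4).
    for y0 in range(height - k + 1):
        for x0 in range(width - k + 1):
            field = frame[y0:y0 + k, x0:x0 + k]
            # Every kernel weights the same receptive field and drives its
            # own hidden-layer neuron (claims 1, 2 and 4).
            fired = [float(np.sum(kern * field)) >= THRESHOLD
                     for kern in KERNELS]
            # The hidden neurons of this field converge on one output neuron.
            if any(fired):
                corners.append((x0 + k // 2, y0 + k // 2))
    return corners
```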
6. The corner point detection method according to claim 1, characterized by:
the third type of convolution kernel includes: convolution kernels configured to detect corners of the object moving in an upper-right-to-lower-left direction, an upper-left-to-lower-right direction, a lower-left-to-upper-right direction, and a lower-right-to-upper-left direction, respectively, and any one of the convolution kernels of the third type includes three regions:
a forward excitation region, a non-responsive excitation region, and a suppression region.
7. The corner point detection method according to claim 6, characterized by:
the forward excitation region of the third type convolution kernel is located in the incoming direction of the corners of the object detected by the third type convolution kernel;
the non-responsive excitation region of the third type convolution kernel is located in the central region of the third type convolution kernel;
the suppression region of the third type convolution kernel is the region of the third type convolution kernel other than the forward excitation region and the non-responsive excitation region.
8. The corner point detection method according to claim 1, characterized by:
the first type convolution kernel includes: convolution kernels configured to detect corners of an object moving in a left-to-right direction and a right-to-left direction, respectively;
the second type convolution kernel includes: convolution kernels configured to detect corners of an object moving in an up-to-down direction and a down-to-up direction, respectively;
and any one of the first type convolution kernels or the second type convolution kernels includes three regions:
a forward excitation region, a non-responsive excitation region, and a suppression region.
9. The corner point detection method according to claim 8, characterized by:
the forward excitation region of the first type convolution kernel or the second type convolution kernel is located in the incoming direction of the corners of the object detected by the first type convolution kernel or the second type convolution kernel;
the non-responsive excitation region of the first type convolution kernel or the second type convolution kernel is located in the central region of the first type convolution kernel or the second type convolution kernel;
the suppression region of the first type convolution kernel or the second type convolution kernel is the region of the first type convolution kernel or the second type convolution kernel other than the forward excitation region and the non-responsive excitation region.
10. The corner point detection method according to any one of claims 6-9, characterized by:
the weight values of the forward excitation region are positive; the weight values of the non-responsive excitation region are zero; the weight values of the suppression region are negative.
11. The corner point detection method according to claim 9, characterized by:
the forward excitation region of the first type convolution kernel or the second type convolution kernel tapers toward the center of the first type convolution kernel or the second type convolution kernel.
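Again for illustration only, the sketch below refines the toy kernel above into a first-type (left-to-right) kernel exhibiting the three regions of claims 8-10 and the taper of claim 11; the kernel size and the weight magnitudes are assumptions, not values fixed by the claims:

```python
import numpy as np

def left_to_right_kernel(k=7, w_exc=1.0, w_inh=-1.0):
    """First-type (left-to-right) kernel: the forward excitation region
    sits on the incoming (left) side and tapers toward the centre
    (claim 11), the central region is non-responsive (zero weight), and
    the remainder is the suppression region (claim 10). Size k and the
    magnitudes w_exc/w_inh are illustrative assumptions."""
    kern = np.full((k, k), w_inh)             # suppression region
    c = k // 2
    for col in range(c):                      # columns on the incoming side
        half = c - 1 - col                    # row extent shrinks toward centre
        kern[c - half:c + half + 1, col] = w_exc  # tapered forward excitation
    kern[c - 1:c + 2, c - 1:c + 2] = 0.0      # non-responsive excitation region
    return kern

# The right-to-left, up-to-down and down-to-up variants are rotations,
# e.g. np.rot90(left_to_right_kernel(), 2) gives the right-to-left kernel.
print(left_to_right_kernel())
```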
12. A spiking neural network processor comprising a hidden layer and an output layer, both of which comprise circuits for simulating neurons, characterized in that the corner point detection method according to any one of claims 1-11 is applied in the spiking neural network processor.
13. A chip comprising a dynamic vision sensor and the spiking neural network processor of claim 12, wherein: the dynamic vision sensor is located in a first die, the spiking neural network processor is located in a second die, and the first die and the second die are coupled through an interposer.
14. An electronic product comprising a first interface module, a second interface module, a processing module, and a response module, wherein: the electronic product further comprises the spiking neural network processor of claim 12; the spiking neural network processor is coupled to the processing module through the first interface module, and the processing module is coupled to the response module through the second interface module;
the spiking neural network processor identifies an input environment signal and transmits the identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111075593.4A CN113516676B (en) | 2021-09-14 | 2021-09-14 | Angular point detection method, impulse neural network processor, chip and electronic product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111075593.4A CN113516676B (en) | 2021-09-14 | 2021-09-14 | Angular point detection method, impulse neural network processor, chip and electronic product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113516676A true CN113516676A (en) | 2021-10-19 |
CN113516676B CN113516676B (en) | 2021-12-28 |
Family
ID=78063192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111075593.4A Active CN113516676B (en) | 2021-09-14 | 2021-09-14 | Angular point detection method, impulse neural network processor, chip and electronic product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113516676B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030535A (en) * | 2023-03-24 | 2023-04-28 | 深圳时识科技有限公司 | Gesture recognition method and device, chip and electronic equipment |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9193075B1 (en) * | 2010-08-26 | 2015-11-24 | Brain Corporation | Apparatus and methods for object detection via optical flow cancellation |
US20130297539A1 (en) * | 2012-05-07 | 2013-11-07 | Filip Piekniewski | Spiking neural network object recognition apparatus and methods |
US20130297542A1 (en) * | 2012-05-07 | 2013-11-07 | Filip Piekniewski | Sensory input processing apparatus in a spiking neural network |
US20150178593A1 (en) * | 2013-12-24 | 2015-06-25 | Huawei Technologies Co., Ltd. | Method, apparatus, and device for detecting convex polygon image block |
CN106097356A (en) * | 2016-06-15 | 2016-11-09 | 电子科技大学 | A kind of image angle point detecting method based on Spiking |
CN106407990A (en) * | 2016-09-10 | 2017-02-15 | 天津大学 | Bionic target identification system based on event driving |
US20210190497A1 (en) * | 2018-07-09 | 2021-06-24 | Samsung Electronics Co., Ltd. | Simultaneous location and mapping (slam) using dual event cameras |
CN109461173A (en) * | 2018-10-25 | 2019-03-12 | 天津师范大学 | A kind of Fast Corner Detection method for the processing of time-domain visual sensor signal |
CN109919957A (en) * | 2019-01-08 | 2019-06-21 | 同济大学 | A kind of angular-point detection method based on dynamic visual sensor |
CN109948725A (en) * | 2019-03-28 | 2019-06-28 | 清华大学 | Based on address-event representation neural network object detecting device |
WO2020207982A1 (en) * | 2019-04-09 | 2020-10-15 | Aictx Ag | Event-driven spiking convolutional neural network |
CN110176028A (en) * | 2019-06-05 | 2019-08-27 | 中国人民解放军国防科技大学 | Asynchronous corner detection method based on event camera |
CN111310760A (en) * | 2020-02-13 | 2020-06-19 | 辽宁师范大学 | Method for detecting onychomycosis characters by combining local prior characteristics and depth convolution characteristics |
US20210174122A1 (en) * | 2020-12-17 | 2021-06-10 | Intel Corporation | Probabilistic sampling acceleration and corner feature extraction for vehicle systems |
CN112633497A (en) * | 2020-12-21 | 2021-04-09 | 中山大学 | Convolutional pulse neural network training method based on reweighted membrane voltage |
CN112966814A (en) * | 2021-03-17 | 2021-06-15 | 上海新氦类脑智能科技有限公司 | Information processing method of fused impulse neural network and fused impulse neural network |
CN113066104A (en) * | 2021-03-25 | 2021-07-02 | 三星(中国)半导体有限公司 | Angular point detection method and angular point detection device |
CN113221855A (en) * | 2021-06-11 | 2021-08-06 | 中国人民解放军陆军炮兵防空兵学院 | Small target detection method and system based on scale sensitive loss and feature fusion |
CN113313240A (en) * | 2021-08-02 | 2021-08-27 | 成都时识科技有限公司 | Computing device and electronic device |
Non-Patent Citations (5)
Title |
---|
DERMOT KERR 等: "Spiking Hierarchical Neural Network for Corner Detection", 《IJCCI 2011 COMPUTER SCIENCE》 * |
PILAR BACHILLER-BURGOS 等: "A Spiking Neural Model of HT3D for Corner Detection", 《FRONT COMPUT NEUROSCI》 * |
SABER MORADI 等: "A scalable multi-core architecture with heterogeneous memory structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs)", 《ARXIV:1708.04198V2》 * |
孔德磊 等: "基于事件的视觉传感器及其应用综述", 《信息与控制》 * |
张宇新: "DVS事件流特征提取及分类方法研究", 《中国优秀硕士学位论文全文数据库信息科技辑》 * |
Also Published As
Publication number | Publication date |
---|---|
CN113516676B (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Camunas-Mesa et al. | An event-driven multi-kernel convolution processor module for event-driven vision sensors | |
CN110135243B (en) | Pedestrian detection method and system based on two-stage attention mechanism | |
CN111667399B (en) | Training method of style migration model, video style migration method and device | |
CN108416266B (en) | Method for rapidly identifying video behaviors by extracting moving object through optical flow | |
US11157764B2 (en) | Semantic image segmentation using gated dense pyramid blocks | |
Mumuni et al. | CNN architectures for geometric transformation-invariant feature representation in computer vision: a review | |
Chen et al. | Adaptive convolution for object detection | |
CN108764244B (en) | Potential target area detection method based on convolutional neural network and conditional random field | |
CN110176028B (en) | Asynchronous corner detection method based on event camera | |
US20220383521A1 (en) | Dense optical flow calculation system and method based on fpga | |
Pavel et al. | Recurrent convolutional neural networks for object-class segmentation of RGB-D video | |
CN113516676B (en) | Angular point detection method, impulse neural network processor, chip and electronic product | |
Gao et al. | Superfast: 200× video frame interpolation via event camera | |
CN112801933A (en) | Object detection method, electronic device and object detection system | |
US11704894B2 (en) | Semantic image segmentation using gated dense pyramid blocks | |
Liu et al. | Sensing diversity and sparsity models for event generation and video reconstruction from events | |
CN114972492A (en) | Position and pose determination method and device based on aerial view and computer storage medium | |
CN115205793B (en) | Electric power machine room smoke detection method and device based on deep learning secondary confirmation | |
Liu et al. | On-sensor binarized fully convolutional neural network with a pixel processor array | |
Zhu et al. | A real-time image recognition system using a global directional-edge-feature extraction VLSI processor | |
Zhang et al. | Multi-scale spatial context features using 3-d recurrent neural networks for pedestrian detection | |
Kollmitz et al. | Predicting obstacle footprints from 2D occupancy maps by learning from physical interactions | |
Li et al. | A Scalable Network for Tiny Object Detection Based on Faster RCNN | |
Bakó et al. | Displacement detection method in video feeds using a distributed architecture on SoC platform for real-time control applications | |
Wright et al. | Computational image processing for a computer vision system using biomimetic sensors and eigenspace object models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-11-15. Patentee after: Zhongguancun Technology Leasing Co.,Ltd., 610, Floor 6, Block A, No. 2, Lize Middle Second Road, Chaoyang District, Beijing 100102. Patentees before: Chengdu Shizhi Technology Co.,Ltd., 18th Floor, China Europe Center, No. 1577 Tianfu Avenue, High-tech Zone, Chengdu, Sichuan 641400; Shanghai Shizhi Technology Co.,Ltd.