CN109800872B - Neuromorphic processor based on segmented multiplexing and parameter quantification sharing


Info

Publication number
CN109800872B
CN109800872B (application number CN201910078948.1A)
Authority
CN
China
Prior art keywords: neuron, data, neuromorphic, unit, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910078948.1A
Other languages
Chinese (zh)
Other versions
CN109800872A (en)
Inventor
胡绍刚
刘夏恺
张成明
乔冠超
刘洋
于奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910078948.1A priority Critical patent/CN109800872B/en
Publication of CN109800872A publication Critical patent/CN109800872A/en
Application granted granted Critical
Publication of CN109800872B publication Critical patent/CN109800872B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A neuromorphic processor based on segmented multiplexing and parameter quantization sharing belongs to the technical field of neuromorphic hardware. The quantization parameter control module directly reads the current synapse type from outside the processor during the operation stage and configures a neuron computing unit according to the synapse type that was read. The synchronous reset module globally resets the neuromorphic processor. The neuron operation core module performs the neuromorphic calculation and temporarily stores input data and routing results: its input pulse data and synapse types are fed in segments, each segment being passed to the next-stage neuron operation core module once its calculation is finished, and each neuron operation core module is reused many times by a time-division multiplexing method. The invention greatly reduces the data that must be stored directly on the neuromorphic processor and the processor area, markedly improves the calculation efficiency of the processor, and reduces the calculation power consumption.

Description

Neuromorphic processor based on segmented multiplexing and parameter quantification sharing
Technical Field
The invention belongs to the technical field of neuromorphic hardware and relates to a large-scale neuromorphic processor, a design method and a chip based on segmented multiplexing and parameter quantization sharing.
Background
In 1989, C. Mead first proposed the concept of "neuromorphic computing". Neuromorphic computing, also known as neuromorphic engineering, is a reverse engineering of the brain: it aims to construct the basic units of the brain's nervous system, namely neurons and synapses, using memristors and threshold switches or analog, digital and digital-analog hybrid technologies, and to organize these neurons and synapses after the organization of brain nerve cells, thereby building large-scale neuromorphic hardware that approaches the information-processing capability of the biological nervous system with low power consumption, low resource consumption and high adaptability.
Existing neuromorphic hardware must store, in advance, as many parameters as there are neurons in the neuromorphic network to be calculated in order to complete the network's calculation. In general, a neuromorphic network that realizes a practical function has at least two layers, and the more complex the function, the larger the network; a complex neural network built on neuromorphic hardware therefore consumes a large amount of on-chip storage space to hold its various parameters.
As the scale of neuromorphic networks keeps growing, so does the volume of the parameters used to configure the neurons, such as weights, thresholds, synaptic delays and refractory periods. Given the limited on-chip storage resources of neuromorphic hardware, how to express and store these network parameters efficiently has become a key factor restricting the hardware realization of large-scale neuromorphic networks.
Disclosure of Invention
The invention provides a large-scale neuromorphic processor based on segmented multiplexing and parameter sharing, aiming at the problem that the conventional neuromorphic processor needs a large amount of storage space to store network parameters.
The technical scheme of the invention is as follows:
a neuromorphic processor based on segmented multiplexing and parameter quantization sharing comprises a synchronous reset module 5, a quantization parameter control module 10 and a plurality of cascaded neuron operation core modules 12:
the synchronous reset module 5 is used for globally resetting the neuromorphic processor;
the neuron operation core module 12 includes:
at least one data buffer unit 18, where the data buffer unit 18 includes a parameter buffer unit 26 and an input buffer unit 25, the parameter buffer unit 26 is configured to buffer the neuron configuration parameters 14 and buffer the synapse types 28 in segments, and the input buffer unit 25 is configured to buffer input data 27 in segments, where the input data 27 of the first-level neuron operation core module 12 in cascade connection is input pulse data of the neuromorphic processor, and the input data 27 of the remaining neuron operation core modules 12 is a pulse data packet output by the previous-level neuron operation core module 12;
at least one neuron computing unit 22 for performing neuromorphic computation on each segment of input data 27 buffered by the input buffer unit 25;
at least one pulse data routing unit 23, configured to receive a pulse data packet formed by the pulse data obtained by completely calculating each segment of the input data 27 by the neuron calculating unit 22, and route the pulse data packet to the next-stage neuron operation core module 12;
at least one time-division multiplexing control unit 19 for detecting an operation state of the neuron computation unit 22 and adopting a time-division multiplexing strategy to control the neuron computation unit 22;
the neuron computing unit 22 in the last stage of neuron operation core module 12 in the cascade connection outputs pulse data after all the segments of the input data 27 are computed, and the pulse data is used as an output signal of the neuromorphic processor;
the quantization parameter control module 10 is configured to, in the operation stage of the neuromorphic processor, directly read a current synapse type from outside the neuromorphic processor, and read a weight of the neuron configuration parameter 14 corresponding to the synapse type from the data caching unit 18 according to the read synapse type to configure the neuron computing unit 22.
Specifically, the quantization parameter control module 10 includes a core enable flag register whose length equals the number of neuron operation core modules 12, used to enable the neuron operation core modules 12.
Specifically, the input data 27 is a frequency-coded and time-coded pulse sequence, such as a Poisson-distributed pulse sequence, and the neuron configuration parameters 14 are quantized values of the parameters of the neuromorphic network corresponding to the neuromorphic processor, where the parameters include, but are not limited to, the synaptic connection state, weight, threshold, leakage constant, set voltage, refractory period and synaptic delay.
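As a concrete illustration of the frequency coding mentioned above, the following is a minimal Python sketch (not part of the patent; the function name and constants are illustrative) of turning an input intensity into a Poisson-distributed pulse sequence:

```python
import random

def poisson_spike_train(rate_hz, dt_s, n_steps, seed=0):
    """Emit a frequency-coded pulse sequence: at every time step a
    pulse (1) is issued with probability rate*dt, which approximates
    a Poisson process when rate*dt is small."""
    rng = random.Random(seed)
    p = rate_hz * dt_s  # per-step pulse probability (assumes p << 1)
    return [1 if rng.random() < p else 0 for _ in range(n_steps)]

# 100 Hz input sampled at 1 ms for 1 s: about 100 pulses on average.
train = poisson_spike_train(rate_hz=100.0, dt_s=0.001, n_steps=1000)
```

A stronger input feature simply maps to a higher rate and therefore to more pulses per unit time.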
The neuron configuration parameters 14 are quantized by an off-chip, off-line quantization method; every quantized value of each parameter is stored in the data cache unit 18 in the quantization format <quantization value, serial number>, where the quantization value is the quantized parameter and the serial number is the index of that quantized value within its class of quantized values.
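The off-chip quantization into <quantization value, serial number> pairs can be sketched as follows. This is a hypothetical uniform-grid quantizer for illustration only, not the patent's training-based method, and all names are invented:

```python
def quantize_offline(weights, n_levels=4):
    """Map each raw weight onto one of a few shared quantized values and
    store it as a <quantization value, serial number> pair: the chip then
    keeps only the short codebook plus a small index per synapse."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (n_levels - 1)
    codebook = [lo + i * step for i in range(n_levels)]   # shared values
    serials = [min(range(n_levels), key=lambda i: abs(w - codebook[i]))
               for w in weights]                          # serial numbers
    return codebook, serials

codebook, serials = quantize_offline([0.11, 0.48, 0.92, 0.53, 0.07])
```

With 4 shared levels, five raw weights shrink to four codebook entries plus five 2-bit serial numbers, which is the storage saving the sharing scheme relies on.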
Specifically, the time-division multiplexing control unit 19 of the last-stage cascaded neuron operation core module 12 detects the operating state of the neuron computing unit 22 and controls it with a time-division multiplexing strategy; in addition, it includes a small storage module that stores the pulse data output after the neuron computing unit 22 in the last-stage module has computed all segments of the input data 27, which serves as the output signal of the neuromorphic processor.
Specifically, the pulse data routing unit 23 is configured to receive a pulse data packet 33 generated when one operation of the neuron operation core module 12 is finished and route the pulse data packet 33, and includes four synchronous first-in first-out memories (FIFOs) configured to buffer data received from other external neuron operation cores.
Specifically, the data cache unit 18 uses an on-chip register to perform real-time storage, so as to save the storage space of the processor.
The processing method of the neuromorphic processor provided by the invention comprises the following steps:
step 1, the upper computer 7 gives the clock input that provides the master clock for the neuromorphic processor based on parameter quantization sharing;
step 2, the synchronous reset module 5 provides a reset signal 6 to initialize the states of all modules in the system;
step 3, starting the quantization parameter control module 10 to initialize the data cache unit 18, inputting the core enabling mark into the quantization parameter control module 10 from the outside, and selecting the neuron operation core module 12 to be used;
step 4, reading the core enabling mark, determining the neuron operation core module 12 to be enabled, and reducing unnecessary configuration of the neuron operation core module 12;
step 5, the quantization parameter control module 10 selects the data exchange mode of each neuron operation core module 12, either interaction with other cores or interaction with the upper computer 7, thereby determining the cascade order;
step 6, the quantization parameter control module 10 reads parameters (synapse type, synaptic connection state, weight, threshold, leakage constant, set voltage, refractory period, synaptic delay, etc.) from outside the neuromorphic processor and writes them into the data cache unit 18 of the selected neuron operation core module 12 for configuration, and the data cache unit 18 stores the synapse types in segments;
step 7, a segment of pulse data is input from the upper computer 7 into the data cache unit 18 of the first-stage neuron operation core module 12 that interacts with the off-chip memory 13; a neuron operation core module 12 that interacts with other neuron operation core modules 12 instead receives the pulse data packet sent from its previous-stage module through the pulse data routing unit 23, and the data cache unit 18 stores the pulse data in segments;
step 8, starting the selected neuron operation core module 12 and starting the calculation of the current neuron;
step 9, the quantization parameter control module 10 reads the state and type of the current synapse of the current neuron from outside the neuromorphic processor;
step 10, the quantization parameter control module 10 controls the data cache unit 18 to read corresponding quantization parameters from the outside, determines the weight of the neuron operation core module 12 according to the selected synapse type, and configures the neuron operation core module 12;
step 11, the neuron calculation unit 22 in the selected neuron operation core module 12 performs the neuromorphic calculation on the pulse data stored in segments; once every stored segment has been processed, one neuromorphic calculation is complete and the unit judges whether to issue a pulse. The first-stage neuron operation core module 12 receives the external pulse data given by the upper computer 7, while the remaining neuron operation core modules 12 receive the pulse data output after the preceding stage has finished its calculation;
step 12, judging whether the neuron operation core module 12 finishes calculating the cached pulse segment, if so, skipping to step 13, otherwise, skipping to step 8;
step 13, judging whether the neuron operation core module 12 finishes calculating all segments of a neuron, if so, jumping to step 14, otherwise, jumping to step 7;
step 14, the neuron computing unit 22 generates a pulse data packet and sends the pulse data packet to the next stage neuron operation core module 12 through the pulse data routing unit 23;
step 15, starting the time-sharing multiplexing control unit 19 to switch the neurons, and performing time-sharing multiplexing on each neuron operation core module 12 through a time-sharing multiplexing strategy;
step 16, judging whether the neuron operation core module 12 finishes calculating all neurons of the layer, if so, skipping to step 17, otherwise, skipping to step 6;
and step 17, clearing the data in the data cache unit 18 and ending the operation of the neuromorphic processor.
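The segment and neuron loops of steps 6 through 16 can be sketched in Python as follows; the callbacks and names are illustrative assumptions, not the patent's interfaces:

```python
def run_core(layer_neurons, segments_per_neuron, buffer_len,
             read_segment, compute_segment, route_packet):
    """Sketch of steps 6-16: for each time-multiplexed logical neuron,
    fetch the input pulses segment by segment (steps 7 and 13), compute
    every cached segment (steps 8-12), then form and route the result
    packet (step 14) before switching neurons (steps 15-16)."""
    packets = []
    for neuron in range(layer_neurons):            # time-division multiplexing
        membrane = 0
        for seg in range(segments_per_neuron):     # segmented multiplexing
            pulses = read_segment(neuron, seg, buffer_len)  # refill cache
            membrane = compute_segment(membrane, pulses)
        packets.append(route_packet(neuron, membrane))
    return packets

# Illustrative callbacks: all-ones pulses, additive membrane, threshold 6.
out = run_core(layer_neurons=2, segments_per_neuron=3, buffer_len=2,
               read_segment=lambda n, s, length: [1] * length,
               compute_segment=lambda v, pulses: v + sum(pulses),
               route_packet=lambda n, v: 1 if v >= 6 else 0)
```

Each logical neuron accumulates three 2-pulse segments here, so both reach the illustrative threshold and emit a packet.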
The invention has the following beneficial effects: to address the low storage efficiency of neuromorphic processors, the parameters are quantized by an off-chip, off-line quantization method; on top of time-division multiplexing and segmented multiplexing, this greatly reduces the data that must be stored directly on the processor, shrinks the processor area, and lowers the processor's power consumption.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a diagram of a digitized discrete-time neuron model.
Fig. 2 is a schematic diagram of a top-level structure of a neuromorphic processor based on segment multiplexing and parameter quantization sharing according to the present invention.
Fig. 3 is a schematic diagram of the internal structure of the neuron operation core module 12 in the neuromorphic processor based on the segmented multiplexing and the parameter quantization sharing according to the present invention.
Fig. 4 is a schematic diagram of time-division multiplexing of a neuromorphic processor based on segmented multiplexing and parameter quantization sharing according to the present invention.
Fig. 5 is a schematic diagram of segment multiplexing of a neuromorphic processor based on segment multiplexing and parameter quantization sharing according to the present invention.
Detailed Description
When studying neuromorphic processors, one finds that a neuromorphic processor must store, in advance, as many parameters as there are neurons in the neuromorphic network to be calculated in order to complete the network computation. Loading and storing such data consumes excessive working time, occupies a large amount of on-chip storage space, and reduces computational efficiency.
The neuromorphic processor provided by the invention quantizes the parameters of the neuromorphic network so that all neurons in the network share a few classes of parameters, thereby reducing the parameters of the neuromorphic network, saving the storage space of the neuromorphic processor, and improving its calculation efficiency. Meanwhile, with segmented calculation, all the input data of one neuron need not be stored in the on-chip storage space at once; the data is stored in segments, and the next segment is fetched only after the current segment has been calculated.
Fig. 2 shows a large-scale neuromorphic processor based on segmented multiplexing and parameter quantization sharing, which includes:
at least one quantization parameter control module 10, configured, during the operation phase of the neuromorphic processor, to read the current synapse type directly from outside the neuromorphic processor and to read the current quantization parameter from the data cache unit 18 in the neuron operation core module 12 to configure that module. Reading the current synapse type directly from outside means the processor no longer stores the parameter types itself; the current neuron parameters are read from outside by the data cache unit 18, further saving processor storage space. In addition, the quantization parameter control module 10 receives the input pulse data of the neuromorphic processor output by the external upper computer 7 and transmits it to the first-stage neuron operation core module 12, and, when processing ends, receives the pulse data stored in the time-division multiplexing control unit 19 of the last-stage neuron operation core module 12 and outputs it to the upper computer 7. The quantization parameter control module 10 includes a core enable flag register of the same length as the number of neuron operation core modules 12, used to enable the neuron operation core modules 12 and to select their data exchange mode.
And the synchronous reset module 5 is used for generating a reset signal 6 and transmitting the reset signal to the quantization parameter control module 10 and the neuron operation core module 12 to realize system global reset.
At least one neuron operation core module 12 performs the neuromorphic calculation, temporarily stores input data, and routes the pulse data packets generated when an operation finishes. Fig. 3 shows the internal structure of the neuron operation core module 12, which includes at least one neuron calculation unit 22, at least one data cache unit 18, at least one time-division multiplexing control unit 19, and at least one pulse data routing unit 23.
The parameters extracted and used by the neuromorphic processor are quantized values obtained by training and quantization: the parameters of the neuromorphic network are quantized into the quantization format off-chip using an off-line training mode, and the quantized values are transmitted to the on-chip data cache unit 18 through the parallel input interface.
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail by referring to specific embodiments in the accompanying drawings, it being understood that the specific embodiments described herein are only for explaining the present invention and are not intended to limit the present invention.
The invention aims to provide a large-scale neuromorphic processor based on segmented multiplexing and parameter sharing, which introduces a segmented multiplexing and parameter quantification sharing structure into the neuromorphic processor, thereby reducing the on-chip storage overhead, reducing the area consumption of the processor and enabling the neuromorphic processor to have higher performance.
The neuromorphic processor provided by the invention is based on a storage-control-calculation-routing structure:
the storage structure is used for caching the quantization parameters and the input data;
the control structure is used for controlling the process of loading external data into the memory structure, the process of reading the data in the memory and sending the data to the computing structure and the process of calculating the neural morphological network;
the computational structure includes an arithmetic logic unit and a comparison logic unit for performing neuromorphic computations in the processor.
FIG. 1 is the digital discrete-time neuron model used by the neuron operation core module. The pulse train 1 is the data of the original feature map after frequency coding; each axon 2 forms a synapse 3 with the neuron 4. The neuron 4 computes each synapse 3 in sequence, and when all synapses 3 have been computed, the neuron 4 judges whether its membrane potential satisfies the pulse-issuing condition and, if so, issues a pulse. The neuron calculation unit 22 of the present invention is built according to the model shown in FIG. 1.
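A minimal Python sketch of one discrete time step of such a neuron follows; it assumes a simple leaky integrate-and-fire update, and every name and constant is illustrative rather than taken from the patent:

```python
def neuron_step(membrane, pulses, weights, leak, threshold, reset):
    """One discrete time step of a FIG. 1 style digital neuron: each
    synapse is accumulated in turn, a constant leak is applied, and a
    pulse is issued with a reset when the membrane potential reaches
    the threshold."""
    for s, w in zip(pulses, weights):   # one synapse per axon
        membrane += s * w               # weighted pulse accumulation
    membrane -= leak                    # leakage per time step
    if membrane >= threshold:
        return reset, 1                 # issue a pulse and reset
    return membrane, 0                  # hold the potential, no pulse

state, fired = neuron_step(0, [1, 1, 0], [2, 3, 5], leak=1, threshold=4, reset=0)
```

With the illustrative integer weights, two incoming pulses accumulate to 5, leak to 4, reach the threshold and fire; a single pulse would leave a sub-threshold potential of 1.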
Fig. 2 is a top-level structural diagram of the neuromorphic processor proposed by the present invention. The synchronous reset module 5 sends a reset signal 6 to perform a global reset, and the upper computer 7 supplies a clock signal 8 to the neuromorphic processor and communicates with the quantization parameter control module 10 through an external control signal 9. The quantization parameter control module 10 controls parameter configuration of the neuron operation core modules 12 (nine instantiated neuron operation core modules 12 in this embodiment) through a neuron operation core module control signal 11. Each neuron operation core module 12 reads neuron configuration parameters 14 from an external memory 13 into its data cache unit 18 for configuration. The upper computer 7 passes the input pulse and synapse type 15 through the quantization parameter control module 10, which inputs the temporarily stored input pulse and synapse type 29 into the data cache unit 18 of the neuron operation core module 12; the input pulse and synapse type 29 comprises the input data 27 and the neuron synapse types 28. The neuron calculation unit 22 in the neuron operation core module 12 computes the input data 27; the calculation result 33 output after one round of calculation is stored in the pulse data routing unit 23, which sends the stored pulse data 16 to the next-stage neuron operation core module 12. After the last round of calculation is completed, the final output data 30 is returned to the quantization parameter control module 10, which outputs the temporarily stored output data 17 to the upper computer 7.
Because the output data 30 is temporarily stored by the quantization parameter control module 10 and re-output with a changed bit width, the same signal is denoted by the two reference numbers 30 and 17; similarly, the neuron configuration parameters 14 and 31, the input pulse and synapse types 32 and 29, and the pulse data 16 and 34 carry different reference numbers only because their bit widths change.
Fig. 3 is a schematic structural diagram of the neuron operation core module 12 according to the present invention. The quantization parameter control module 10 inputs the input pulse and synapse type 29 into the data cache unit 18. Through the data cache unit control signal 20, the time-division multiplexing control unit 19 makes the data cache unit 18 read in the neuron configuration parameters 14 from the off-chip memory 13, and through the neuron calculation unit control signal 21 it loads the cached neuron configuration parameters 31 from the data cache unit 18 into the neuron calculation unit 22 for configuration. Once calculation starts, the time-division multiplexing control unit 19 directs the neuron calculation unit 22, again via the control signal 21, to read the cached input pulse and synapse type 32 from the data cache unit 18 and compute. After completing one round of calculation, the neuron calculation unit 22 sends the calculation result 33 to the next-stage neuron operation core module 12 through the pulse data routing unit 23, and at the start of the next round it receives the output pulse data 34 sent from the previous-stage module for a new round of calculation. After all calculations are finished, the final output data 30 is sent to the quantization parameter control module 10, which transmits the temporarily stored output data 17 to the upper computer 7.
The cascade mode of the neuron operation core modules 12 is set through the quantization parameter control module 10: before calculation, it selects which neuron operation core modules 12 in the neuromorphic processor to enable and sets the data transmission mode of each one. Any neuron operation core module 12 may serve as the first stage that receives external data, and the output data of any module may be transmitted to any other module to form the cascade. The working mode of the neuromorphic processor is therefore highly flexible and not limited by the number or fixed connections of the neuron operation core modules 12.
Fig. 4 is a schematic diagram of the time-division multiplexing of the neuron operation core module 12 according to the present invention. An instantiated neuron calculation unit 22 (a physical neuron) in the neuron operation core module 12 is time-division multiplexed P times, making it equivalent to a neuron layer containing P neuron calculation units (the equivalent logical neurons 24 generated by time-division multiplexing). Taking one neuron calculation unit 22 per neuron operation core module 12 as an example, instantiating Q neuron operation core modules 12 and time-division multiplexing each of them P times constructs a large-scale neural network of Q layers with P neurons per layer, i.e. a network of P × Q neurons, where P and Q are positive integers.
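The effect of time-division multiplexing can be illustrated with a short Python sketch (hypothetical names, not the patent's implementation): one physical computing function reused P times behaves like a layer of P logical neurons:

```python
def multiplexed_layer(physical_unit, per_neuron_inputs):
    """Reuse one instantiated computing unit P times so that it behaves
    like a layer of P logical neurons (the equivalent neurons 24 in
    Fig. 4). `physical_unit` is any single-neuron function."""
    return [physical_unit(inputs) for inputs in per_neuron_inputs]

# One physical sum-and-threshold unit emulating a layer of P = 4 neurons.
unit = lambda xs: 1 if sum(xs) >= 2 else 0
layer_out = multiplexed_layer(unit, [[1, 1], [1, 0], [0, 0], [1, 1]])
```

Only one `unit` exists in "hardware" here, yet the loop yields a full 4-wide layer output, which is exactly the area saving the scheme targets.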
Fig. 5 is a schematic diagram of the segmented multiplexing of the neuron operation core module according to the present invention. On the basis of time-division multiplexing, the input data is processed in segments. The storage space of the data cache unit 18 is divided into an input buffer unit 25 and a parameter buffer unit 26. The input buffer unit 25 stores the input data 27; the parameter buffer unit 26 stores the neuron configuration parameters 14 and the neuron synapse types 28. The neuron configuration parameters 14 are input before each calculation and re-input once the neuron calculation unit 22 has computed all segments of the input data 27; the neuron synapse types 28 are read directly from outside the neuromorphic processor by the quantization parameter control module 10 during operation and stored in the parameter buffer unit 26 in segments. In operation, the neuron configuration parameters 14 are read from the off-chip memory 13 into the parameter buffer unit 26, and loaded into the neuron calculation unit 22 at the start of calculation for configuration. Suppose the total length of the input data is M and the maximum input length the data cache unit 18 can store is N, with N < M. Only a segment of input data 27 of length N is read and stored in the input buffer unit 25, and N neuron synapse types 28 are read and stored in the parameter buffer unit 26. During calculation, the input data 27 and the neuron synapse type 28 of each synapse of the segment are read from the data cache unit 18, and the synapses are computed one by one until all synapses of the segment have been computed.
After this segment of the input data has been calculated, the next segment of length N is loaded and calculation continues; the complete data is thus read in several passes, and the final result is obtained once all segment calculations have finished.
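The segmented fetch-and-compute loop can be sketched as follows; `segmented_compute` and its summing body are an illustrative stand-in for the neuron calculation, and all names are assumptions:

```python
def segmented_compute(read_chunk, total_len_m, buf_len_n):
    """Fig. 5 style segment multiplexing: with only N < M words of
    on-chip buffer, fetch the M-word input in segments and fold each
    segment into the running result before overwriting the buffer
    with the next segment."""
    acc = 0
    offset = 0
    while offset < total_len_m:
        n = min(buf_len_n, total_len_m - offset)
        segment = read_chunk(offset, n)   # refill the input buffer unit
        acc += sum(segment)               # compute this segment fully
        offset += n
    return acc

data = list(range(10))                    # M = 10 words held off-chip
result = segmented_compute(lambda o, n: data[o:o + n], 10, 4)
```

Three fetches of at most N = 4 words cover all M = 10 inputs while the on-chip buffer never holds more than 4 words at once.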
In conclusion, by quantizing the parameters of the neuromorphic network, the invention lets each layer, several layers, or all layers of the network share the quantization parameters; the input data is stored in segments, with only one segment held in the on-chip storage space at a time and the temporary storage cleared and refilled with the next segment after the current one is computed; and the neurons are time-division multiplexed, a single neuron operation unit being reused many times to form a layer of neurons. These measures reduce the parameters of the neuromorphic network, save the storage space of the neuromorphic processor, improve its calculation efficiency, and reduce its calculation power consumption.

Claims (5)

1. A neuromorphic processor based on segmented multiplexing and parameter quantization sharing is characterized by comprising a synchronous reset module (5), a quantization parameter control module (10) and a plurality of cascaded neuron operation core modules (12):
the synchronous reset module (5) is used for globally resetting the neuromorphic processor;
the neuron operation core module (12) includes:
at least one data buffer unit (18), wherein the data buffer unit (18) comprises a parameter buffer unit (26) and an input buffer unit (25), the parameter buffer unit (26) is used for buffering neuron configuration parameters (14) and buffering synapse types (28) in a segmented manner, and the input buffer unit (25) is used for buffering input data (27) in a segmented manner, wherein the input data (27) of a first-stage neuron operation core module (12) in cascade connection is input pulse data of the neuromorphic processor, and the input data (27) of the rest neuron operation core modules (12) are pulse data packets output by a previous-stage neuron operation core module (12);
at least one neuron computing unit (22) for performing a neuromorphic computation on each segment of the input data (27) buffered by the input buffer unit (25);
at least one pulse data routing unit (23) for receiving the pulse data produced once the neuron computing unit (22) has finished computing each segment of the input data (27), assembling the pulse data into a pulse data packet, and routing the pulse data packet to the next-stage neuron operation core module (12);
at least one time-division multiplexing control unit (19) for detecting the operating state of the neuron computing unit (22) and controlling the neuron computing unit (22) according to a time-division multiplexing strategy;
the neuron computing unit (22) in the last-stage neuron operation core module (12) of the cascade outputs pulse data, once all segments of the input data (27) have been computed, as the output signal of the neuromorphic processor;
the quantization parameter control module (10) is configured to read the current synapse type directly from outside the neuromorphic processor during the operating phase of the neuromorphic processor, and, according to the synapse type read, to read from the data buffer unit (18) the weight of the neuron configuration parameters (14) corresponding to that synapse type in order to configure the neuron computing unit (22).
2. The neuromorphic processor based on segmented multiplexing and parameter quantization sharing of claim 1, wherein the quantization parameter control module (10) comprises a core enable flag register, whose length equals the number of neuron operation core modules (12), for enabling the neuron operation core modules (12).
3. The neuromorphic processor based on segmented multiplexing and parameter quantization sharing of claim 1, wherein the input data (27) is a frequency-coded or time-coded pulse sequence, and the neuron configuration parameters (14) are quantized values of the parameters of the neuromorphic network corresponding to the neuromorphic processor, including but not limited to synaptic connection status, weight, threshold, leakage constant, reset voltage, refractory period, and synaptic delay.
4. The neuromorphic processor based on segmented multiplexing and parameter quantization sharing of claim 1, wherein the time-division multiplexing control unit (19) of the last-stage neuron operation core module (12) of the cascade comprises a small storage module for storing the pulse data output after the neuron computing unit (22) in the last-stage neuron operation core module (12) has finished computing every segment of the input data (27), the stored pulse data serving as the output signal of the neuromorphic processor.
5. The neuromorphic processor based on segmented multiplexing and parameter quantization sharing of claim 1, characterized in that the pulse data routing unit (23) comprises four synchronous first-in-first-out memories.
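For orientation, the configuration parameters enumerated in claim 3 correspond to the state update of a leaky integrate-and-fire (LIF) neuron. The following is a generic LIF sketch under assumed parameter values, not the circuit claimed in the patent:

```python
def lif_step(v, spike_in, weight=0.6, threshold=1.0, leak=0.9,
             v_reset=0.0, refractory=0, refractory_period=2):
    """One time step: leak, integrate, fire.

    Returns (membrane voltage, output spike, remaining refractory steps).
    All parameter values are illustrative assumptions.
    """
    if refractory > 0:                # neuron is silent while refractory
        return v_reset, 0, refractory - 1
    v = v * leak + weight * spike_in  # leaky integration of the weighted input
    if v >= threshold:                # fire, reset, enter refractory period
        return v_reset, 1, refractory_period
    return v, 0, 0

v, spike, refr = 0.0, 0, 0
outputs = []
for s_in in [1, 1, 0, 1]:
    v, spike, refr = lif_step(v, s_in)
    outputs.append(spike)
# outputs == [0, 1, 0, 0]: the neuron fires on the second input spike,
# then stays silent for the two-step refractory period.
```

Quantizing these per-neuron parameters to a small shared set is what lets the processor configure the neuron computing unit from a synapse-type index, as claim 1 describes.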
CN201910078948.1A 2019-01-28 2019-01-28 Neuromorphic processor based on segmented multiplexing and parameter quantification sharing Active CN109800872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910078948.1A CN109800872B (en) 2019-01-28 2019-01-28 Neuromorphic processor based on segmented multiplexing and parameter quantification sharing


Publications (2)

Publication Number Publication Date
CN109800872A CN109800872A (en) 2019-05-24
CN109800872B true CN109800872B (en) 2022-12-16

Family

ID=66560405


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368981B (en) * 2020-03-06 2021-07-09 上海新氦类脑智能科技有限公司 Method, apparatus, device and storage medium for reducing storage area of synaptic connections
CN113537449B (en) 2020-04-22 2024-02-02 北京灵汐科技有限公司 Data processing method based on impulse neural network, calculation core circuit and chip

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103516391A (en) * 2012-06-15 2014-01-15 中兴通讯股份有限公司 Multipath detection method and apparatus
CN108364061A (en) * 2018-02-13 2018-08-03 北京旷视科技有限公司 Arithmetic unit, operation execute equipment and operation executes method
CN108846408A (en) * 2018-04-25 2018-11-20 中国人民解放军军事科学院军事医学研究院 Image classification method and device based on impulsive neural networks
CN109165730A (en) * 2018-09-05 2019-01-08 电子科技大学 State quantifies network implementation approach in crossed array neuromorphic hardware

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20050136509A1 (en) * 2003-09-10 2005-06-23 Bioimagene, Inc. Method and system for quantitatively analyzing biological samples


Non-Patent Citations (3)

Title
"A strategy for time series prediction using Segment Growing Neural Gas"; Jorge R. Vergara; 2017 12th International WSOM; 2017-08-31; full text *
"Research on Convolutional Neural Network Applications Based on FPGA"; Wang Yu; China Master's Theses Full-text Database; 2017-02-15; full text *
"Neuromorphic Computing: Current Status and Prospects"; Chen Yiran; Artificial Intelligence; 2018-04; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant