CN114819122A - Data processing method and device based on impulse neural network - Google Patents

Data processing method and device based on impulse neural network Download PDF

Info

Publication number
CN114819122A
CN114819122A CN202210316689.3A CN202210316689A CN114819122A CN 114819122 A CN114819122 A CN 114819122A CN 202210316689 A CN202210316689 A CN 202210316689A CN 114819122 A CN114819122 A CN 114819122A
Authority
CN
China
Prior art keywords
code
pulse
neural network
pooling
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210316689.3A
Other languages
Chinese (zh)
Other versions
CN114819122B (en
Inventor
尹志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202210316689.3A priority Critical patent/CN114819122B/en
Publication of CN114819122A publication Critical patent/CN114819122A/en
Application granted granted Critical
Publication of CN114819122B publication Critical patent/CN114819122B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The application provides a data processing method and device based on a pulse neural network. The method comprises the following steps: compressing each pulse signal sequence generated by input data to obtain each compression code of the input data; and pooling each compressed code through a pulse neural network to obtain target data. According to the data processing method based on the pulse neural network, the pulse signal sequence is compressed to form the compression code, so that the length of the compression code is greatly reduced compared with the length of the pulse signal sequence, and the requirement on a pulse storage space is exponentially reduced during storage. And because the length of the formed compression code is short, the calculation time can be exponentially reduced when the pooling calculation is carried out, the pooling efficiency is improved, and the processing efficiency of the data input into the SNN network model is further improved.

Description

Data processing method and device based on impulse neural network
Technical Field
The application relates to the technical field of a pulse neural network, in particular to a data processing method and device based on the pulse neural network.
Background
With the development of brain science and brain-like computing technology, the Spiking Neural Network (SNN) is considered as a computing model close to a brain information processing mode, can simulate the human brain to process data such as images or voice, and has the advantages of low power consumption and simple structure.
In the related art, when data processing such as image or sound is performed by using a pulse neural network, a plurality of pulse signal sequences are formed on input data such as image frames or voice data based on a pulse frequency coding mode through an SNN network model, and then pooling is performed through the SNN network model, so that the plurality of pulse signal sequences are pooled into one output to obtain a target result.
The pulse signal sequence formed by the pulse frequency coding method is formed by the pulse signal at each time, and the pulse signal sequence formed in this way needs to store the pulse signal at each time in different registers. For example, a pulse signal sequence formed by pulse signals at N times needs a register with a length of N to store, and the SNN network model needs to count each input pulse signal sequence to obtain a final pooling result. However, in practice, the length of the pulse signal sequence is usually long, that is, the value of N is often large, so that the efficiency of pooling the pulse signal sequence is low under the condition of limited hardware resources, and the processing efficiency of data input into the SNN network model is further affected.
Disclosure of Invention
The embodiment of the application provides a data processing method and device based on a pulse neural network, and the processing efficiency of data input into an SNN network model is improved.
In a first aspect, an embodiment of the present application provides a data processing method based on a spiking neural network, including:
compressing each pulse signal sequence generated by input data to obtain each compression code of the input data;
and pooling each compressed code through a pulse neural network to obtain target data.
In one embodiment, compressing each pulse signal sequence generated by input data to obtain each compressed code of the input data comprises:
and compressing each pulse signal sequence in the form of binary codes of the number of pulses to obtain each compressed code.
In one embodiment, the number of coded bits of the compression coding is determined according to the maximum number of pulse signals in each pulse signal sequence.
In one embodiment, pooling each of the compressed codes through a neural network to obtain target data includes:
and performing maximum pooling on each compressed code through the impulse neural network to obtain target data.
In one embodiment, the maximum pooling of each of the compressed codes by the spiking neural network to obtain target data includes:
sequentially shifting each compressed code from the highest code bit of the compressed code to the lowest code bit of the compressed code to perform maximum pooling calculation, and acquiring an output value corresponding to each code bit;
and forming the target data according to the output value of each code bit.
In one embodiment, the output value of the current one of the code bits is:
Figure BDA0003569180410000021
wherein n represents the number of compression codes, t represents the time corresponding to the current code bit, and W i (t) represents the weight corresponding to said current code bit of compression encoding i at time t,
Figure BDA0003569180410000022
j≠i,S i (t) represents a pulse value on the current code bit of compression encoding i.
In an embodiment, when the current code bit is the highest code bit, the weight of the current code bit is 1.
In a second aspect, an embodiment of the present application provides a data processing apparatus based on a spiking neural network, including:
the compression coding module is used for compressing each pulse signal sequence generated by input data to obtain each compression code of the input data;
and the data pooling module is used for pooling each compressed code through a pulse neural network to obtain target data.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory storing a computer program, where the processor implements the steps of the data processing method based on the spiking neural network according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer program product, which includes a computer program that, when being executed by a processor, implements the steps of the data processing method based on a spiking neural network according to the first aspect.
According to the data processing method and device based on the impulse neural network, each impulse signal sequence generated by input data is compressed, each compression code of the input data is obtained, each compression code is pooled through the impulse neural network, target data is obtained, and therefore the impulse signal sequences are compressed to form compression codes, the length of the compression codes is greatly reduced compared with the length of the impulse signal sequences, and therefore the requirement for impulse storage space is exponentially reduced when the compression codes are stored. And because the length of the formed compression code is short, the calculation time can be exponentially reduced when the pooling calculation is carried out, the pooling efficiency is improved, and the processing efficiency of the data input into the SNN network model is further improved.
Drawings
In order to more clearly illustrate the technical solutions in the present application or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram of a data processing method based on a spiking neural network according to an embodiment of the present application;
FIG. 2 is a diagram of a spiking neural network pooled neuron in the related art;
FIG. 3 is a schematic diagram of a spiking neural network pooled neuron provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a data processing apparatus based on a spiking neural network according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, it is a schematic flow chart of a data processing method based on an impulse neural network according to an embodiment of the present invention, where the method is applied in a server for processing data input to the impulse neural network, such as image frames or voice data. As shown in fig. 1, a data processing method based on a spiking neural network provided in this embodiment includes:
step 101, compressing each pulse signal sequence generated by input data to obtain each compression code of the input data;
and step 102, pooling each compressed code through a pulse neural network to obtain target data.
The pulse signal sequence is compressed to form compression coding, so that the length of the compression coding is greatly reduced compared with that of the pulse signal sequence, and the requirement on a pulse storage space is exponentially reduced during storage. And because the length of the formed compression code is short, the calculation time can be exponentially reduced when the pooling calculation is carried out, the pooling efficiency is improved, and the processing efficiency of the data input into the SNN network model is further improved.
In the related art, when processing input data to the impulse neural network, the impulse neural network represents that a plurality of pulse signals generated by the input data in a certain period of time form 0/1(1 represents that there is a pulse and 0 represents that there is no pulse) pulse signal sequence with a length of N to represent that a certain pulse signal is input in N times. The signal strength of the pulse signal sequence is determined by the number of pulses on the signal line in the period of time, wherein the larger the number is, the stronger the signal is, and the smaller the number is, the weaker the signal is. It is clear that representing a signal of great strength in this way requires a very long pulse signal sequence. Meanwhile, the pulse input values at the N moments need to be stored by using a register with the length of N, and in practical application, the value of N often reaches 128 or 1024 or even higher, which results in a large demand for registers. And the pulse frequency coding formed by the pulse signal sequence needs 128 bits or even higher bits, and the efficiency of performing pooling calculation on the pulse signals is low under the condition that hardware resources are limited.
Considering that the pulse frequency coding represents the signal strength by the number of pulses, and the signal strength is independent of the specific positions of the pulses, in an embodiment, each pulse signal sequence generated by the input data may be compressed in the form of M-ary value of the number of each pulse signal in the pulse signal sequence to obtain the compressed pulse frequency coding, i.e., the compression coding. The input data may be image frames or voice data, etc.
In order to make the required storage space small enough, in an embodiment, the number of pulse signals in any pulse signal sequence is compressed in a form of two's complement, so as to obtain a binary value of the number of pulse signals as a compression code, so that the length of the compression code is exponentially reduced, and further, the pulse storage space can be exponentially reduced.
In order to make the number of coded bits of each compressed code consistent for facilitating the subsequent pooling calculation, in an embodiment, the number of coded bits of each compressed code is determined according to the maximum number of pulse signals in each pulse signal sequence.
Illustratively, the number of pulse signals is encoded into two's complement for compression, i.e. the maximum number of coded bits B of the pulse signals N is:
B=log 2 (N+1)
for example, when the maximum number N of pulse signals in each pulse signal sequence is 255, B is 8; the pulse signal sequence with the pulse number of 20 is compressed into the compression coding of 0b00010100 by the complement of binary code, and the pulse signal sequence with the pulse number of 255 is compressed into the compression coding of 0b 11111111111 by the complement of binary code. Therefore, the compression coding is compressed to 3.1% of the original compression coding, and meanwhile, the register is changed from the originally required 225-bit register to the 8-bit register for storage, so that the storage space is greatly saved.
Taking input data as an image frame as an example, inputting the image frame into a pulse neural network in real time, performing pulse conversion on the image frame through the pulse neural network to obtain a plurality of pulse signal sequences of the image frame, and then compressing the number of pulses of each pulse sequence converted from the image frame through the pulse neural network to obtain each compression code of the image frame.
In one embodiment, after each compression code is obtained by compression, each compression code is pooled by a pooling neuron in the impulse neural network, and the compression codes of a plurality of inputs are pooled into one output, namely target data, by the pooling neuron.
Common types of pooling calculations are mean pooling and maximum pooling. The mean pooling outputs the mean of all inputs, which can be generally replaced by integrating neurons with all inputs weighted 1/n (n is the number of inputs); the maximum pooling requires selecting the input with the largest number of pulses for output, and each input needs to be counted and then compared to the maximum.
In the related art, a spiking neural network pooled neuron is shown in fig. 2. Since the length of the pulse signal is N, and when the maximum pooling calculation is performed, it is necessary to count each input and then compare the maximum, so that the maximum pooling calculation is complicated and not good to implement, and therefore, the conventional SNN network generally does not use the maximum pooling neuron.
Therefore, in an embodiment, after each compression code is obtained by compression, because the number of coding bits of each compression code is small, each compression code can be subjected to maximum pooling through a pulse neural network to obtain target data, and time required by maximum pooling calculation for pulse signals is exponentially reduced, so that the technical problem that the conventional SNN network cannot rapidly perform maximum pooling is solved.
In one embodiment, the maximum value pooling of each of the compressed codes by the impulse neural network to obtain target data includes:
sequentially shifting each compressed code from the highest code bit of the compressed code to the lowest code bit of the compressed code to perform maximum pooling calculation, and acquiring an output value corresponding to each code bit;
and forming the target data according to the output value of each code bit.
In the maximum value pooling calculation, the maximum value is selected in a mode that the highest bit to the lowest bit of the compression code are sequentially input in a pulse form for comparison, if the pulse value is large when the high bit is 1, the pulse value is left, and if the pulse value is small when the high bit is 0, the pulse value is eliminated; if the values are all 0, the values are not eliminated; until a maximum pulse value is selected, or compared to the lowest bit leaving a plurality of pulse values of the same maximum value. And sequentially outputting the maximum values of comparison while sequentially comparing and selecting from high to low, namely the output result of the maximum value pooling. Illustratively, each compression code is "011001", "001110" or "010110", respectively, wherein the leftmost code bit in the compression code is the highest bit and decreases from left to right in sequence, that is, the rightmost code bit is the lowest bit. When the maximum pooling calculation is performed, the highest pulse value "0" in "011001", the highest pulse value "0" in "001110" and the highest pulse value "0" in "010110" are compared; if the pulse values of the three are all 0, the three are not eliminated, and the maximum value '0' corresponding to the maximum value of the highest code bit after pooling calculation is output. 
Then, shifting the whole of each compression code to the left to obtain three parts, namely a pulse value "1" of the next code bit in "011001", a pulse value "0" of the next code bit in "001110" and a pulse value "1" of the next code bit in "010110", wherein "1" in the three pulse values indicates that the pulse value is large, the pulse value is left, the pulse value "0" is eliminated, namely the pulse values "011001" and "010110" are left, the pulse value "001110" is eliminated, and the maximum value "1" obtained after the code bit is subjected to maximum value pooling calculation is used as an output value corresponding to the code bit. Then, the compression code that is not eliminated is shifted to the left as a whole, and the code bit "1" in "011001" and the code bit "0" in "010110" are obtained, but the pulse value of the corresponding code bit is not obtained because "001110" is eliminated. Then, the code bit "1" in the acquired "011001" and the code bit "0" in the acquired "010110" are compared, wherein "1" in the two indicates that the pulse value is large, and "0" is eliminated, that is, "011001" is left, and "010110" is eliminated, and the maximum value "1" obtained after the maximum value of the code bit is subjected to pooling calculation is used as the output value corresponding to the code bit. And analogizing in sequence until the comparison of the lowest code bit is completed and the corresponding output value after the maximum value pooling calculation of the code bit is output, combining the output values according to the corresponding code bits, and forming new compressed codes, namely target data.
In one embodiment, the maximum pooling calculation for each compression encoding may be implemented by a pulse maximum pooling neuron as shown in FIG. 3. As shown in fig. 3, the pulse sequence that originally needs N-bit registers to be stored only needs B-bit registers to be stored after compression to form compression codes, and the maximum pooling calculation also needs only B clock cycles. The maximum value pooling neuron sequentially shifts each binary compressed code Sn from the highest bit (B-1) to the lowest bit (0) through an n-pair input AND gate to perform calculation, thereby sequentially outputting an output value of each code bit.
Specifically, the output value of the current code bit in each code bit is:
Figure BDA0003569180410000081
wherein n represents the number of compression codes, t represents the time corresponding to the current code bit, and W i (t) represents the weight corresponding to said current code bit of compression encoding i at time t,
Figure BDA0003569180410000082
j≠i,S i (t) represents a pulse value on the current code bit of compression encoding i. This can mean performing an OR operation on the calculation result, i.e., W 1 (t)&&S 1 (t)、W 2 (t)&&S 2 (t)…W n (t)&&S n (t) and the like, and performing OR operation on the calculation results.
It can be understood that when t is 0, the current code bit is the highest code bit, and when t is 1, the current code bit is the next highest code bit, and so on; and when t is B-1, the current code bit is the lowest code bit.
In one embodiment, W represents a 1-bit register, W i (0) That is, if the current code bit is the most significant code bit, the weight thereof is 1, which indicates that the pulse values of the most significant code bits of all compression codes are compared. If the pulse value of the current code bit of any compressed code is 1, the pulse value is winning, the candidate output value of the code bit of the pulse value of the current code bit is represented, the corresponding weight value is unchanged, the pulse values of the current code bits of other compressed codes are 0, the candidate output value is eliminated, the corresponding weight value is modified to be 0, and meanwhile, the output value corresponding to the current code bit is 1; and if the pulse values of the current code bit of all the compression codes are 0, the pulse values are not eliminated, the weight values are not modified, and meanwhile, the output value corresponding to the current code bit is 0. After comparing the current code bit of each compression code, the next time is entered to compare the pulse value of the next code bit, all the code bits are compared from high to low in sequence, B clock cycles are total, B output values are obtained, and then each output value is arranged according to the corresponding code bit to form target data.
In one embodiment, the weight is only 1 or 0. When the weight is 1, the winning is represented, namely the pulse value of the current code bit corresponding to the winning is left; when the weight is 0, it indicates that the pulse value of the current code bit corresponding to the weight is eliminated.
Exemplary, as shown in the following table:
time t 0 1 2 3 4 5
S 1 (t) 0 1 1 0 0 1
S 2 (t) 0 0 1 1 1 0
S 3 (t) 0 1 0 1 1 0
W 1 (t) 1 1 1 1 1 1
W 2 (t) 1 1 0 0 0 0
W 2 (t) 1 1 1 0 0 0
Output (t) 0 1 1 0 0 1
As shown in the above table, the three compression codes in the table are S 1 =011001、S 2 001110 and S 3 010110. At time 0 corresponding to the highest code bit of the three compression codes, W 1 (0)、W 2 (0)、W 3 (0) Are all 1, represent the most significant bits S of the 3 compression codes 1 (0)、S 2 (0)、S 3 (0) A comparison is made. Due to S 1 (0)、S 2 (0)、S 3 (0) All are 0, the output value output by the OR gate is 0, the output value corresponding to the highest bit is 0 at this time, and W is calculated 1 (1)=1、W 2 (1) 1 and W 3 (1) If 1, i.e. no compression coding is eliminated, the next time t is 1, the next highest order S of the three compression codes 1 (1)、S 2 (1)、S 3 (1) Continuing to compare; at the moment t is 1, S is generated 1 (1)=1、S 2 (1)=0、S 3 (1) When the output value of the or gate is 1, the output value corresponding to the next higher order bit is 1, and S is the same as S 1 (1)=1、S 2 (1)=0、S 3 (1) 1, so S 2 Is eliminated, its corresponding weight W 2 Modified to 0, leaving S 1 And S 3 At this time W 1 (2)=1、W 2 (2) 0 and W 3 (2) When the next time t is equal to 2, only S is present at 1 1 (2) And S 3 (2) Continue the comparison, and because of S 2 The weight is 0 when t is 2, so at the subsequent time when t > 2, S 2 The values output through the and gates are both 0. And by analogy, the non-maximum pulse value is eliminated, and the last rest is the output value of the corresponding code bit, so that the output target data of '011001' can be finally obtained.
By the method, when the maximum pooling is calculated, only calculation is needed through B clock cycles, the calculation time of the maximum pooling is greatly shortened, and the maximum pooling performed by using the SNN network is simple and efficient.
The following describes a data processing apparatus based on a spiking neural network provided in an embodiment of the present application, and the following described data processing apparatus based on a spiking neural network and the above described data processing method based on a spiking neural network may be referred to correspondingly.
In one embodiment, as shown in fig. 4, there is provided a data processing apparatus based on a spiking neural network, including:
a compression encoding module 210, configured to compress each pulse signal sequence generated by input data, and obtain each compression encoding of the input data;
and the data pooling module 220 is used for pooling each compression code through a pulse neural network to obtain target data.
The pulse signal sequence is compressed to form compression coding, so that the length of the compression coding is greatly reduced compared with that of the pulse signal sequence, and the requirement on a pulse storage space is exponentially reduced during storage. And because the length of the formed compression code is short, the calculation time can be exponentially reduced when the pooling calculation is carried out, the pooling efficiency is improved, and the processing efficiency of the data input into the SNN network model is further improved.
In an embodiment, the compression encoding module 210 is specifically configured to:
and compressing each pulse signal sequence in the form of binary codes of the number of pulses to obtain each compressed code.
In one embodiment, the number of coded bits of the compression coding is determined according to the maximum number of pulse signals in each pulse signal sequence.
In one embodiment, the data pooling module 220 is specifically configured to:
and performing maximum pooling on each compressed code through the impulse neural network to obtain target data.
In one embodiment, the data pooling module 220 is specifically configured to:
sequentially shifting each compressed code from the highest code bit of the compressed code to the lowest code bit of the compressed code to perform maximum pooling calculation, and acquiring an output value corresponding to each code bit;
and forming the target data according to the output value of each code bit.
In one embodiment, the output value of the current one of the code bits is:
Figure BDA0003569180410000111
wherein n represents the number of compression codes, t represents the time corresponding to the current code bit, and W i (t) represents the weight corresponding to said current code bit of compression encoding i at time t,
Figure BDA0003569180410000112
j≠i,S i (t) represents a pulse value on the current code bit of compression encoding i.
In an embodiment, when the current code bit is the highest code bit, the weight of the current code bit is 1.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor)810, a Communication Interface 820, a memory 830 and a Communication bus 840, wherein the processor 810, the Communication Interface 820 and the memory 830 communicate with each other via the Communication bus 840. The processor 810 may invoke computer programs in the memory 830 to perform the steps of the impulse neural network-based data processing method, including, for example:
compressing each pulse signal sequence generated by input data to obtain each compression code of the input data;
and pooling each compressed code through a pulse neural network to obtain target data.
In addition, the logic instructions in the memory 830 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present application further provides a computer program product, where the computer program product includes a computer program, the computer program may be stored on a non-transitory computer readable storage medium, and when the computer program is executed by a processor, a computer can perform the steps of the data processing method based on an impulse neural network provided in the foregoing embodiments, for example, the steps include:
compressing each pulse signal sequence generated from input data to obtain each compression code of the input data;
and pooling each compression code through an impulse neural network to obtain target data.
In yet another aspect, embodiments of the present application further provide a processor-readable storage medium storing a computer program, where the computer program is configured to cause a processor to perform the steps of the method provided in each of the above embodiments, for example including:
compressing each pulse signal sequence generated from input data to obtain each compression code of the input data;
and pooling each compression code through an impulse neural network to obtain target data.
The processor-readable storage medium may be any available medium or data storage device accessible to a processor, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical (MO) disks), optical memory (e.g., CDs, DVDs, BDs, HVDs), and semiconductor memory (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND flash), solid-state drives (SSDs)).
The above-described apparatus embodiments are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware. With this understanding, the above technical solutions may be embodied as a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A data processing method based on an impulse neural network, characterized by comprising:
compressing each pulse signal sequence generated from input data to obtain each compression code of the input data;
and pooling each compression code through an impulse neural network to obtain target data.
2. The data processing method based on an impulse neural network according to claim 1, wherein compressing each pulse signal sequence generated from the input data to obtain each compression code of the input data comprises:
compressing each pulse signal sequence into a binary code of its pulse count to obtain each compression code.
3. The method according to claim 2, wherein the number of code bits of the compression code is determined according to the maximum number of pulse signals among the pulse signal sequences.
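A plausible reading of claim 3 is that the code width is the number of bits needed to represent the largest pulse count that can occur; a hypothetical helper illustrating that reading (the name and exact rounding convention are assumptions):

```python
import math

def code_width(max_pulses):
    """Code bits needed to represent pulse counts from 0 up to max_pulses.

    Assumed reading of claim 3: the width is fixed by the maximum number
    of pulse signals in any of the pulse signal sequences.
    """
    return max(1, math.ceil(math.log2(max_pulses + 1)))

# sequences with at most 7 pulses fit in 3 code bits; 8 pulses need 4
```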
4. The method according to any one of claims 1 to 3, wherein pooling each compression code through the impulse neural network to obtain the target data comprises:
performing maximum pooling on each compression code through the impulse neural network to obtain the target data.
5. The method according to claim 4, wherein performing maximum pooling on each compression code through the impulse neural network to obtain the target data comprises:
shifting each compression code bit by bit from its highest code bit to its lowest code bit to perform a maximum pooling calculation, and obtaining an output value corresponding to each code bit;
and forming the target data from the output values of the code bits.
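Claims 5 to 7 suggest a bit-serial, most-significant-bit-first maximum over the compression codes: at each code bit, the output is 1 if any still-contending code carries a 1, and codes carrying a 0 at such a position can no longer be the maximum. A minimal sketch of that reading (the elimination rule and function name are assumptions, not quoted from the patent):

```python
def max_pool_codes(codes):
    """Bit-serial maximum over equal-width binary codes, MSB first.

    codes: list of bit-tuples, highest code bit first.  The `active` flags
    play the role of the per-code weights W_i(t): all start at 1 (claim 7),
    and a code's flag drops to 0 once it shows a 0 where some surviving
    code shows a 1, i.e. once it has lost the comparison.
    """
    active = [True] * len(codes)
    out = []
    for pos in range(len(codes[0])):        # highest code bit -> lowest
        bit = int(any(active[i] and codes[i][pos] for i in range(len(codes))))
        if bit:                             # deactivate codes that fell behind
            active = [active[i] and codes[i][pos] == 1 for i in range(len(codes))]
        out.append(bit)
    return tuple(out)

# max of 011 (3), 001 (1), 100 (4) is 100 (4)
```

The output bits, taken in order, form the binary code of the largest pulse count, which is what the target data would consist of under this reading.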
6. The method according to claim 5, wherein the output value of a current code bit is:
Figure FDA0003569180400000011
wherein n represents the number of compression codes, t represents the time corresponding to the current code bit, W_i(t) represents the weight corresponding to the current code bit of compression code i at time t,
Figure FDA0003569180400000021
and S_i(t) represents the pulse value on the current code bit of compression code i.
7. The method according to claim 6, wherein, when the current code bit is the highest code bit, the weight of the current code bit is 1.
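The patent's own formulas for claims 6 and 7 appear only as images (the Figure FDA references above) and are not reproduced in this text. One reconstruction consistent with the surrounding claim language — a threshold-style output over weighted code bits, with weights initialized to 1 at the highest code bit and zeroed once a code falls behind — would be the following; this is an assumption, not the patent's verbatim formula:

```latex
S_{\mathrm{out}}(t) =
  \begin{cases}
    1, & \text{if } \sum_{i=1}^{n} W_i(t)\, S_i(t) \ge 1,\\[2pt]
    0, & \text{otherwise},
  \end{cases}
\qquad
W_i(t) =
  \begin{cases}
    1, & \text{at the highest code bit (claim 7)},\\[2pt]
    W_i(t-1)\,\bigl(1 - S_{\mathrm{out}}(t-1)\bigl(1 - S_i(t-1)\bigr)\bigr), & \text{otherwise}.
  \end{cases}
```

Under this reading, W_i(t) stays at 1 until an instant where the output fired but code i did not, after which it remains 0 — matching the bit-serial elimination behavior described in claim 5.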
8. A data processing apparatus based on an impulse neural network, characterized by comprising:
a compression coding module, configured to compress each pulse signal sequence generated from input data to obtain each compression code of the input data;
and a data pooling module, configured to pool each compression code through an impulse neural network to obtain target data.
9. An electronic device comprising a processor and a memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the data processing method based on an impulse neural network according to any one of claims 1 to 7.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the data processing method based on an impulse neural network according to any one of claims 1 to 7.
CN202210316689.3A 2022-03-28 2022-03-28 Data processing method and device based on impulse neural network Active CN114819122B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210316689.3A CN114819122B (en) 2022-03-28 2022-03-28 Data processing method and device based on impulse neural network


Publications (2)

Publication Number Publication Date
CN114819122A true CN114819122A (en) 2022-07-29
CN114819122B CN114819122B (en) 2022-12-06

Family

ID=82530538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210316689.3A Active CN114819122B (en) 2022-03-28 2022-03-28 Data processing method and device based on impulse neural network

Country Status (1)

Country Link
CN (1) CN114819122B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180330239A1 (en) * 2016-01-20 2018-11-15 Cambricon Technologies Corporation Limited Apparatus and method for compression coding for artificial neural network
CN109214395A (en) * 2018-08-21 2019-01-15 电子科技大学 A kind of new image representation method based on impulsive neural networks
CN110059812A (en) * 2019-01-26 2019-07-26 中国科学院计算技术研究所 Impulsive neural networks operation chip and related operation method
CN110378476A (en) * 2019-07-11 2019-10-25 中国人民解放军国防科技大学 Approximate realization method, system and medium for maximum pooling layer of pulse convolution neural network
CN113537449A (en) * 2020-04-22 2021-10-22 北京灵汐科技有限公司 Data processing method based on impulse neural network, computing core circuit and chip
CN113935457A (en) * 2021-09-10 2022-01-14 中国人民解放军军事科学院战争研究院 Pulse neural network input signal coding method based on normal distribution


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JESUS L. LOBO et al.: "Evolving Spiking Neural Networks for online learning over drifting data streams", Neural Networks *

Also Published As

Publication number Publication date
CN114819122B (en) 2022-12-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant