WO2018149217A1 - Neural network computing core information processing method, system and computer device - Google Patents

Neural network computing core information processing method, system and computer device

Info

Publication number
WO2018149217A1
WO2018149217A1 (PCT/CN2017/114662)
Authority
WO
WIPO (PCT)
Prior art keywords
neuron
multiplexing
group
information
current
Application number
PCT/CN2017/114662
Other languages
English (en)
French (fr)
Inventor
裴京 (Pei Jing)
施路平 (Shi Luping)
焦鹏 (Jiao Peng)
邓磊 (Deng Lei)
吴臻志 (Wu Zhenzhi)
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from CN201710085540.8A (CN106971228B)
Priority claimed from CN201710085547.XA (CN106971229B)
Priority claimed from CN201710085556.9A (CN106971227B)
Application filed by 清华大学 (Tsinghua University)
Publication of WO2018149217A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/06 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • The present invention relates to the field of neural network technologies, and in particular to a neural network computing core information processing method, system, and computer device.
  • Neuromorphic engineering was proposed by Carver Mead in 1990 to simulate the architecture of biological nervous systems with large-scale integrated circuits and to construct neuromorphic computing systems.
  • Early neuromorphic computing systems were generally implemented with analog circuits, but in recent years digital circuits and digital-analog hybrid circuits have been used increasingly in neuromorphic engineering. At present, neuromorphic engineering and neuromorphic circuits are among the emerging research hotspots worldwide.
  • The traditional neuromorphic computing platform is designed to simulate brain neuron models and ion channel activity through analog circuits, and to construct connections and routing using digital circuits and on-chip memory, making it easy to change the neuron connection map.
  • In a traditional neural network, computing cores are used to accomplish large-scale information processing tasks, in which the axon of a neuron within a computing core connects to at most 256 neurons through synapses. When carrying neural network operations, this limits the output of every layer of the network to no more than 256, i.e., the number of neurons in the next layer cannot exceed 256; this connection limit between neurons greatly restricts the information processing capability of the neural network.
  • Embodiments of the present invention provide a neural network computing core information processing method, system, and computer device, which can expand the information processing capability of a neural network.
  • A neural network computing core information processing method includes:
  • determining a front-end computing core multiplexing group, the group comprising at least two front-end computing cores;
  • configuring a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle; and
  • receiving, according to the multiplexing rule and in the current operation step, the neuron information output by each front-end computing core.
  • In one embodiment, dividing the operation step into at least two operation cycles includes:
  • dividing the operation step into at least two operation cycles at equal intervals.
  • In one embodiment, configuring the multiplexing rule of each neuron in the current computing core includes:
  • separately configuring multiplexing rules for the dendrite and the soma (cell body) of each neuron in the current computing core.
  • In one embodiment, the neuron information output by a front-end computing core includes:
  • artificial neuron information continuously output by the front-end computing core.
  • In one embodiment, before the step of determining the front-end computing core multiplexing group, the method further includes:
  • determining that the information processing mode of the current computing core is the multiplexing mode, where the information processing mode further includes a non-multiplexing mode.
  • A neuron information sending method includes:
  • determining a neuron demultiplexing group, the group comprising at least two neurons;
  • configuring a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle; and
  • according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the demultiplexing group outputs its neuron information in its corresponding operation cycle.
  • A neuron information receiving method includes:
  • determining a front-end neuron multiplexing group, the group comprising at least two front-end neurons;
  • configuring a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle; and
  • receiving, according to the multiplexing rule and within the current operation step, the neuron information output by each front-end neuron.
  • A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of any of the above embodiments when executing the computer program.
  • A neural network computing core information processing system includes:
  • a multiplexing group determining module configured to determine a front-end computing core multiplexing group, where the group includes at least two front-end computing cores;
  • an operation cycle allocation module configured to configure a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle; and
  • a neuron information receiving module configured to receive, according to the multiplexing rule and in the current operation step, the neuron information output by each front-end computing core.
  • A neuron information sending system includes:
  • a demultiplexing group determining module configured to determine a neuron demultiplexing group, the group comprising at least two neurons;
  • an operation cycle allocation module configured to configure a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle; and
  • a neuron information sending module configured to cause, according to the demultiplexing rule, each neuron in the demultiplexing group to output its neuron information in its corresponding operation cycle within the current operation step.
  • A neuron information receiving system includes:
  • a multiplexing group determining module configured to determine a front-end neuron multiplexing group, the group comprising at least two front-end neurons;
  • an operation cycle allocation module configured to configure a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle; and
  • a neuron information receiving module configured to receive, according to the multiplexing rule and within the current operation step, the neuron information output by each front-end neuron.
  • In the above neural network computing core information processing method, system, and computer device, a front-end computing core multiplexing group is set so that, following the configured multiplexing rule, the current computing core receives neuron information sent by a different front-end computing core in each operation cycle of the current operation step. The current computing core can thus receive more information sent by front-end computing cores within the duration of the current operation step, which improves the information receiving capability of the computing core and thereby the information processing capability of the entire neural network.
  • FIG. 1 is a schematic flow chart of a neural network computing core information processing method according to an embodiment;
  • FIG. 2 is a schematic flow chart of a neural network computing core information processing method according to another embodiment;
  • FIG. 3 is a schematic structural diagram of a neural network computing core information processing system according to an embodiment;
  • FIG. 4 is a schematic structural diagram of a neural network computing core information processing system according to another embodiment;
  • FIG. 5 is a schematic diagram of a neural network computing core information processing method according to another embodiment;
  • FIG. 6 is a schematic flow chart of a neuron information sending method according to an embodiment;
  • FIG. 7 is a schematic flow chart of a neuron information sending method according to another embodiment;
  • FIG. 8 is a schematic structural diagram of a neuron information sending system according to an embodiment;
  • FIG. 9 is a schematic structural diagram of a neuron information sending system according to another embodiment;
  • FIG. 10 is a schematic flow chart of a neuron information receiving method according to an embodiment;
  • FIG. 11 is a schematic flow chart of a neuron information receiving method according to another embodiment;
  • FIG. 12 is a schematic structural diagram of a neuron information receiving system according to an embodiment;
  • FIG. 13 is a schematic structural diagram of a neuron information receiving system according to another embodiment;
  • FIG. 14 is a schematic diagram of a neuron information receiving method according to another embodiment.
  • FIG. 1 is a schematic flow chart of a neural network computing core information processing method according to an embodiment; the method shown in FIG. 1 includes:
  • Step S100: determine a front-end computing core multiplexing group, the group comprising at least two front-end computing cores.
  • Specifically, to enable the current computing core to receive more information input by front-end computing cores within one operation step, the front-end computing cores are multiplexed within one operation step, and the front-end computing cores to be multiplexed must be determined. Their number and range can be set flexibly according to the requirements of the task performed by the neural network; any number of computing cores can be multiplexed, as long as, after multiplexing, each computing core still has enough time to send its information.
  • The operation step (STEP) is a fixed duration in which a computing core processes information; all neurons in the neural network process data synchronously according to the operation step.
  • Step S200: configure a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle.
  • Specifically, dividing the operation step into at least two operation cycles means dividing one STEP into multiple operation cycles (also called PHASEs). To ensure that every multiplexed front-end computing core can be matched to an operation cycle, the number of operation cycles must be set greater than or equal to the number of front-end computing cores to be multiplexed.
  • The one-to-one correspondence between the front-end computing cores in the multiplexing group and the operation cycles means that the information sent by a front-end computing core is transmitted only in the one operation cycle corresponding to it. In an actual neural network, a front-end computing core can also correspond to multiple operation cycles, or one operation cycle to multiple front-end computing cores, further improving the information receiving capability of the current computing core; the basic principle is the same as the one-to-one case, so it is not repeated here, and the mapping can be set flexibly according to requirements.
  • Step S300: according to the multiplexing rule, receive, in the current operation step, the neuron information output by each front-end computing core.
  • Specifically, once the current computing core has configured the multiplexing rule, in each operation cycle of the current operation step it only needs to receive the neuron information sent by the front-end computing core corresponding to that cycle.
  • In this embodiment, setting a front-end computing core multiplexing group lets the current computing core receive, following the configured multiplexing rule, neuron information from a different front-end computing core in each operation cycle of the current operation step, so that the current computing core can receive more information within the duration of the current operation step. This improves the information receiving capability of the computing core and thereby the information processing capability of the entire neural network, as the sketch below illustrates.
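The following is a minimal Python sketch of the one-to-one multiplexing rule described above. It is illustrative only; all names (`FrontEndCore`, `CurrentCore`, and so on) are hypothetical and do not come from the patent.

```python
# Illustrative sketch of phase-multiplexed reception: one operation step
# (STEP) is divided into as many operation cycles (PHASEs) as there are
# front-end cores, and the current core reads from exactly one front-end
# core per PHASE.

from typing import Dict, List


class FrontEndCore:
    def __init__(self, core_id: int):
        self.core_id = core_id

    def output(self, phase: int) -> str:
        # Stand-in for the neuron information a front-end core emits.
        return f"neuron-info(core={self.core_id}, phase={phase})"


class CurrentCore:
    def __init__(self, front_ends: List[FrontEndCore]):
        # Multiplexing rule: PHASE i <-> front-end core i (one-to-one).
        # The number of PHASEs must be >= the number of front-end cores.
        self.rule: Dict[int, FrontEndCore] = dict(enumerate(front_ends))

    def run_step(self) -> List[str]:
        received = []
        for phase, core in self.rule.items():
            # In this PHASE, only the corresponding core's output is read.
            received.append(core.output(phase))
        return received


if __name__ == "__main__":
    group = [FrontEndCore(i) for i in range(4)]  # multiplexing group of 4
    print(CurrentCore(group).run_step())
```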
  • In one embodiment, dividing the operation step into at least two operation cycles includes dividing the operation step into at least two operation cycles at equal intervals.
  • Specifically, the operation step may instead be divided at non-equal intervals, with some operation cycles longer and some shorter, so that a front-end computing core whose output neuron information carries a large amount of information corresponds to a relatively long operation cycle, ensuring that the neuron information is received completely. The allocation of operation cycle lengths can be set flexibly according to demand; a small sketch of such a proportional division follows this paragraph.
  • In this embodiment, dividing the operation step into equal operation cycles lets the current computing core receive the neuron information sent by the different front-end computing cores at a set time interval, without measuring the duration of each operation cycle; the implementation is simpler and more reliable, and the information processing efficiency of the neural network is improved.
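A sketch of the non-equal division mentioned above, assuming PHASE lengths proportional to each front-end core's expected information volume; the function name and time unit are illustrative, not from the patent.

```python
# Hypothetical sketch of non-equal PHASE division: cores that send more
# neuron information get proportionally longer operation cycles.

from typing import List


def split_step(step_us: int, volumes: List[int]) -> List[int]:
    """Return one PHASE duration (in microseconds) per front-end core,
    proportional to that core's expected information volume."""
    total = sum(volumes)
    durations = [step_us * v // total for v in volumes]
    durations[-1] += step_us - sum(durations)  # absorb rounding remainder
    return durations


print(split_step(1000, [1, 1, 2, 4]))  # -> [125, 125, 250, 500]
```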
  • In one embodiment, configuring the multiplexing rule of each neuron in the current computing core includes separately configuring multiplexing rules for the dendrite and the soma (cell body) of each neuron in the current computing core.
  • Specifically, the dendrite of each neuron in the current computing core is used to receive the information sent by front-end neurons, and the soma of each neuron is used to compute on the information received by the dendrite.
  • In the multiplexing rule of the current computing core, the dendrites and somas of the neurons are each given a corresponding multiplexing rule: for example, which PHASEs of the current STEP the dendrite uses to receive the neuron information output by front-end neurons, and in which PHASE of the current STEP the soma processes historical membrane potential information. Because the information they process does not conflict, the PHASE assigned to the soma may coincide with a PHASE assigned to the dendrite.
  • To reserve time for the current neuron to compute the information of the current STEP, at least one PHASE at the end of the STEP, after all PHASEs assigned to dendrites and somas, is reserved for the current neuron's own computation.
  • In this embodiment, separately configuring the multiplexing rules of the dendrites and somas of the neurons in the current computing core makes the information processing of the current computing core more efficient; a schedule sketch follows.
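A toy schedule illustrating the arrangement just described, with assumed phase assignments (the concrete numbers are hypothetical): dendrite-receive PHASEs, a soma PHASE that coincides with one of them, and a trailing PHASE reserved for local computation.

```python
# Hypothetical per-STEP schedule: dendrite-receive PHASEs, a soma PHASE
# that may coincide with a dendrite PHASE (the work does not conflict),
# and at least one trailing PHASE reserved for the neuron's own computation.

DENDRITE_PHASES = [0, 1, 2, 3]   # receive from front-end neurons
SOMA_PHASES = [3]                # process historical membrane potential
RESERVED_PHASES = [4]            # trailing PHASE(s) for local computation

NUM_PHASES = 5
for phase in range(NUM_PHASES):
    tasks = []
    if phase in DENDRITE_PHASES:
        tasks.append("dendrite: receive front-end neuron info")
    if phase in SOMA_PHASES:
        tasks.append("soma: update membrane potential")
    if phase in RESERVED_PHASES:
        tasks.append("neuron: compute this STEP's output")
    print(f"PHASE {phase}: " + "; ".join(tasks))
```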
  • In one embodiment, the neuron information output by a front-end computing core includes artificial neuron information continuously output by the front-end computing core.
  • Specifically, when the current computing core multiplexes its information reception while a front-end computing core uses the traditional, non-multiplexed sending mode, the front-end computing core must contain artificial neurons and must send continuously.
  • In this embodiment, because the received neuron information output by the front-end computing core is continuously output artificial neuron information, the current computing core can process the neuron information that the front-end computing core sends in the traditional transmission manner.
  • FIG. 2 is a schematic flow chart of a neural network computing core information processing method according to another embodiment; the method shown in FIG. 2 includes:
  • Step S90: determine that the information processing mode of the current computing core is the multiplexing mode, where the information processing mode further includes a non-multiplexing mode.
  • Specifically, the current computing core may choose to work in the multiplexing mode or in the non-multiplexing mode; the non-multiplexing mode is the working mode of the conventional technology.
  • Step S100: determine a front-end computing core multiplexing group, the group comprising at least two front-end computing cores.
  • Step S200: configure a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle.
  • Step S300: according to the multiplexing rule, receive, in the current operation step, the neuron information output by each front-end computing core.
  • In this embodiment, the provided information processing mode lets the current computing core choose whether to work in the multiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • FIG. 3 is a schematic structural diagram of a neural network computing core information processing system according to an embodiment; the system shown in FIG. 3 includes:
  • a multiplexing group determining module 100 configured to determine a front-end computing core multiplexing group, where the group includes at least two front-end computing cores;
  • an operation cycle allocation module 200 configured to configure a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle; the module is also used to divide the operation step into at least two operation cycles at equal intervals, and to separately configure the multiplexing rules of the dendrites and somas of the neurons in the current computing core; and
  • a neuron information receiving module 300 configured to receive, according to the multiplexing rule and in the current operation step, the neuron information output by each front-end computing core, including the artificial neuron information continuously output by the front-end computing cores.
  • In this embodiment, setting a front-end computing core multiplexing group lets the current computing core receive, following the configured multiplexing rule, neuron information from a different front-end computing core in each operation cycle of the current operation step, so that more information can be received within the duration of the current operation step; this improves the information receiving capability of the computing core and thereby the information processing capability of the entire neural network. Dividing the operation step into equal operation cycles lets the current computing core receive the neuron information sent by different front-end computing cores at a set time interval without measuring the duration of each operation cycle, which is simpler and more reliable and improves the information processing efficiency of the neural network. Because the received neuron information is continuously output artificial neuron information, the current computing core can process the neuron information that the front-end computing core sends in the traditional transmission manner.
  • FIG. 4 is a schematic structural diagram of a neural network computing core information processing system according to another embodiment; the system shown in FIG. 4 includes:
  • a processing mode determining module 90 configured to determine that the information processing mode of the current computing core is the multiplexing mode, where the information processing mode further includes a non-multiplexing mode;
  • a multiplexing group determining module 100 configured to determine a front-end computing core multiplexing group, where the group includes at least two front-end computing cores;
  • an operation cycle allocation module 200 configured to configure a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle; and
  • a neuron information receiving module 300 configured to receive, according to the multiplexing rule and in the current operation step, the neuron information output by each front-end computing core.
  • In this embodiment, the provided information processing mode lets the current computing core choose whether to work in the multiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • In one embodiment, the multiplexing of the current computing core can be implemented by means of registers, as shown in Table 1.
  • FIG. 5 is a schematic diagram of this embodiment given in conjunction with Table 1, which shows one register-based implementation of the multiplexing of the dendrites and somas of the neurons in the current computing core. D_type selects the processing mode of the dendrite: when it is 0, the existing processing mode is used, without multiplexing; when it is 1, the dendrite of the current neuron uses the multiplexing mode. Its bit width is 1, meaning the variable is described by 1 bit. D_start_phase is the first valid operation cycle of the dendrite computation, and D_end_phase is the last; used together, they indicate in the register the position of the multiplexed operation cycles. The soma fields in the latter half of Table 1 are identical in form to the dendrite fields; a hedged sketch of one possible packing of these fields follows.
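Below is a sketch of how the register fields named above might be packed. The soma field names (`S_type`, `S_start_phase`, `S_end_phase`) and all field widths other than the 1-bit type flag are assumptions, since Table 1 itself is only available as an image in the original publication.

```python
# Hypothetical register layout for the dendrite/soma multiplexing fields
# named in the text. D_type is 1 bit per the text; the phase fields are
# assumed to be 8 bits wide, and the soma (S_*) fields mirror the dendrite
# fields, as the text states the two halves of Table 1 are identical in form.
from dataclasses import dataclass


@dataclass
class MultiplexRegister:
    d_type: int          # 1 bit: 0 = non-multiplexed, 1 = multiplexed
    d_start_phase: int   # first valid PHASE for dendrite computation
    d_end_phase: int     # last valid PHASE for dendrite computation
    s_type: int          # assumed soma counterpart of D_type
    s_start_phase: int   # assumed soma counterpart of D_start_phase
    s_end_phase: int     # assumed soma counterpart of D_end_phase

    def pack(self) -> int:
        """Pack the fields into one integer (assumed 8-bit phase fields)."""
        word = self.d_type & 0x1
        word = (word << 8) | (self.d_start_phase & 0xFF)
        word = (word << 8) | (self.d_end_phase & 0xFF)
        word = (word << 1) | (self.s_type & 0x1)
        word = (word << 8) | (self.s_start_phase & 0xFF)
        word = (word << 8) | (self.s_end_phase & 0xFF)
        return word


reg = MultiplexRegister(1, 0, 3, 1, 3, 3)
print(hex(reg.pack()))
```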
  • FIG. 6 is a schematic flow chart of a neuron information sending method according to an embodiment; the method shown in FIG. 6 includes:
  • Step S110: determine a neuron demultiplexing group, the group comprising at least two neurons.
  • Specifically, to enable the neurons of a neural network to send more neuron information within one operation step, the neurons are demultiplexed within one operation step, and the number and range of neurons to be demultiplexed must be determined. Any number of neurons can be demultiplexed, set flexibly according to the requirements of the task performed by the neural network, as long as, after demultiplexing, each neuron still has enough time to send its information.
  • The operation step (STEP) is a fixed duration in which a neuron processes information; all neurons in the neural network process data synchronously according to the operation step.
  • Step S120: configure a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle.
  • Specifically, dividing the operation step into at least two operation cycles means dividing one STEP into multiple operation cycles (also called PHASEs); to ensure that every demultiplexed neuron can be matched to an operation cycle, the number of operation cycles must be set greater than or equal to the number of neurons to be demultiplexed.
  • Making each neuron in the demultiplexing group correspond to an operation cycle means that the information sent by a neuron is sent within the operation cycle corresponding to it.
  • Step S130: according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the demultiplexing group outputs its neuron information in its corresponding operation cycle.
  • Specifically, once the current neuron has configured the demultiplexing rule, it simply sends its neuron information in its corresponding operation cycle within the current operation step.
  • In this embodiment, setting a neuron demultiplexing group makes the current neurons send neuron information in order, following the configured demultiplexing rule and the configured operation cycles within the current operation step, so that more information can be sent within the duration of the operation step. This improves the neuron information sending capability and thereby the information processing capability of the entire neural network.
  • In one embodiment, dividing the operation step into at least two operation cycles includes dividing the operation step into at least two operation cycles at equal intervals.
  • Specifically, the operation step may instead be divided at non-equal intervals, with some operation cycles longer and some shorter, so that a neuron whose output neuron information carries a large amount of information corresponds to a relatively long operation cycle, ensuring the integrity of neuron information sending. The allocation of operation cycle lengths can be set flexibly according to demand.
  • In this embodiment, dividing the operation step into equal operation cycles lets the current neurons send neuron information at a set time interval, without measuring the duration of each operation cycle; the implementation is simpler and more reliable, and the information processing efficiency of the neural network is improved.
  • In one embodiment, making each neuron in the demultiplexing group correspond to the operation cycles includes:
  • one neuron in the demultiplexing group corresponding to one operation cycle, or
  • one neuron in the demultiplexing group corresponding to multiple operation cycles, with any one operation cycle corresponding to only one neuron.
  • Specifically, the one-to-one correspondence between neurons and operation cycles, or the mapping of one neuron to multiple operation cycles, guarantees the integrity of the neurons' output information and makes the demultiplexing of neurons more flexible.
  • In this embodiment, a neuron may correspond to one operation cycle or to multiple operation cycles, so that a neuron that sends a large amount of neuron information has enough operation-cycle time to send it, ensuring the integrity of neuron information sending.
  • In one embodiment, outputting neuron information in the corresponding operation cycle includes outputting artificial neuron information or spiking (pulsed) neuron information.
  • Specifically, supporting the output of both artificial neuron information and spiking neuron information makes the scheme compatible with artificial neural networks and spiking neural networks.
  • In this embodiment, either artificial neuron information or spiking neuron information can be output, improving the information sending capability for both artificial neural networks and spiking neural networks; a sending-side sketch follows.
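A minimal sending-side sketch under the same hypothetical names as the earlier examples; the split between an analog-valued "artificial" payload and an event-like "spiking" payload is illustrative only.

```python
# Hypothetical sketch of demultiplexed sending: each neuron in the group
# owns one or more PHASEs of the STEP and outputs only in those PHASEs.

from typing import Dict, List, Optional, Union

Payload = Union[float, bool]  # artificial (value) or spiking (event)


class Neuron:
    def __init__(self, neuron_id: int, spiking: bool):
        self.neuron_id = neuron_id
        self.spiking = spiking

    def output(self) -> Payload:
        # Spiking neurons emit an event; artificial neurons emit a value.
        return True if self.spiking else 0.42


def run_step(schedule: Dict[int, Neuron], num_phases: int) -> List[Optional[Payload]]:
    """schedule maps PHASE index -> the one neuron allowed to send in it."""
    bus: List[Optional[Payload]] = []
    for phase in range(num_phases):
        sender = schedule.get(phase)
        bus.append(sender.output() if sender else None)
    return bus


n0, n1 = Neuron(0, spiking=False), Neuron(1, spiking=True)
# Neuron 1 carries more information, so it owns two PHASEs (see text).
print(run_step({0: n0, 1: n1, 2: n1}, num_phases=4))
```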
  • FIG. 7 is a schematic flow chart of a neuron information sending method according to another embodiment; the method shown in FIG. 7 includes:
  • Step S101: determine that the information processing mode of the current neuron is the demultiplexing mode, where the information processing mode further includes a non-demultiplexing mode.
  • Specifically, the current neuron may choose to work in the demultiplexing mode or in the non-demultiplexing mode; the non-demultiplexing mode is the working mode of the conventional technology.
  • Step S110: determine a neuron demultiplexing group, the group comprising at least two neurons.
  • Step S120: configure a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle.
  • Step S130: according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the demultiplexing group outputs its neuron information in its corresponding operation cycle.
  • In this embodiment, the provided information processing mode lets the current neuron choose whether to work in the demultiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • FIG. 8 is a schematic structural diagram of a neuron information sending system according to an embodiment; the system shown in FIG. 8 includes:
  • a demultiplexing group determining module 110 configured to determine a neuron demultiplexing group, the group comprising at least two neurons;
  • an operation cycle allocation module 120 configured to configure a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle; the module is also used to divide the operation step into at least two operation cycles at equal intervals, and to make one neuron in the group correspond to one operation cycle, or one neuron correspond to multiple operation cycles with any one operation cycle corresponding to only one neuron; and
  • a neuron information sending module 130 configured to cause, according to the demultiplexing rule, each neuron in the demultiplexing group to output its neuron information in its corresponding operation cycle within the current operation step, outputting artificial neuron information or spiking neuron information.
  • In this embodiment, setting a neuron demultiplexing group makes the current neurons send neuron information in order, following the configured demultiplexing rule and operation cycles within the current operation step, so that more information can be sent within the duration of the operation step; this improves the neuron information sending capability and thereby the information processing capability of the entire neural network. Dividing the operation step into equal operation cycles lets the current neurons send neuron information at a set time interval without measuring the duration of each operation cycle, which is simpler and more reliable and improves the information processing efficiency of the neural network. A neuron may correspond to one or to multiple operation cycles, so that a neuron sending a large amount of information has enough operation-cycle time to send it, ensuring the integrity of neuron information sending. Either artificial neuron information or spiking neuron information can be output, improving the information sending capability for both artificial and spiking neural networks.
  • FIG. 9 is a schematic structural diagram of a neuron information sending system according to another embodiment; the system shown in FIG. 9 includes:
  • a processing mode determining module 101 configured to determine that the information processing mode of the current neuron is the demultiplexing mode, where the information processing mode further includes a non-demultiplexing mode;
  • a demultiplexing group determining module 110 configured to determine a neuron demultiplexing group, the group comprising at least two neurons;
  • an operation cycle allocation module 120 configured to configure a demultiplexing rule for the current neurons according to the neuron demultiplexing group, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the group corresponds to an operation cycle; and
  • a neuron information sending module 130 configured to cause, according to the demultiplexing rule, each neuron in the demultiplexing group to output its neuron information in its corresponding operation cycle within the current operation step.
  • In this embodiment, the provided information processing mode lets the current neuron choose whether to work in the demultiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • FIG. 10 is a schematic flow chart of a neuron information receiving method according to an embodiment; the method includes:
  • Step S210: determine a front-end neuron multiplexing group, the group comprising at least two front-end neurons.
  • Specifically, to enable the current neuron to receive more information input by front-end neurons within one operation step, the front-end neurons are multiplexed within one operation step, and the number and range of front-end neurons to be multiplexed must be determined. Any number of neurons can be multiplexed, set flexibly according to the requirements of the task performed by the neural network, as long as, after multiplexing, each neuron still has enough time to send its information.
  • The operation step (STEP) is a fixed duration in which a neuron processes information; all neurons in the neural network process data synchronously according to the operation step.
  • Step S220: configure a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle.
  • Specifically, dividing the operation step into at least two operation cycles means dividing one STEP into multiple operation cycles (also called PHASEs); to ensure that every multiplexed front-end neuron can be matched to an operation cycle, the number of operation cycles must be set greater than or equal to the number of front-end neurons to be multiplexed.
  • The one-to-one correspondence between the front-end neurons in the multiplexing group and the operation cycles means that the information sent by a front-end neuron is transmitted only in the one operation cycle corresponding to it. In an actual neural network, a front-end neuron can also correspond to multiple operation cycles, or one operation cycle to multiple front-end neurons, further improving the information receiving capability of the current neuron; the basic principle is the same as the one-to-one case, so it is not repeated here, and the mapping can be set flexibly according to requirements.
  • Step S230: according to the multiplexing rule, receive, within the current operation step, the neuron information output by each front-end neuron.
  • Specifically, once the current neuron has configured the multiplexing rule, in each operation cycle of the current operation step it only needs to receive the neuron information sent by the front-end neuron corresponding to that cycle.
  • In this embodiment, setting a front-end neuron multiplexing group lets the current neuron receive, following the configured multiplexing rule, neuron information from a different front-end neuron in each operation cycle of the current operation step, so that more information can be received within the duration of the operation step; this improves the neuron information receiving capability and thereby the information processing capability of the entire neural network.
  • In one embodiment, dividing the operation step into at least two operation cycles includes dividing the operation step into at least two operation cycles at equal intervals.
  • Specifically, the operation step may instead be divided at non-equal intervals, with some operation cycles longer and some shorter, so that a front-end neuron whose output neuron information carries a large amount of information corresponds to a relatively long operation cycle, ensuring that the neuron information is received completely. The allocation of operation cycle lengths can be set flexibly according to demand.
  • In this embodiment, dividing the operation step into equal operation cycles lets the current neuron receive the neuron information sent by different front-end neurons at a set time interval, without measuring the duration of each operation cycle; the implementation is simpler and more reliable, and the information processing efficiency of the neural network is improved.
  • In one embodiment, configuring the multiplexing rule of the current neuron includes separately configuring multiplexing rules for the dendrite and the soma of the current neuron.
  • Specifically, the dendrite of the current neuron is used to receive the information sent by front-end neurons, and the soma of the current neuron is used to compute on the information received by the dendrite.
  • In the multiplexing rule of the current neuron, the dendrite and the soma are each given a corresponding multiplexing rule: for example, which PHASEs of the current STEP the dendrite uses to receive the neuron information output by front-end neurons, and in which PHASE of the current STEP the soma processes historical membrane potential information. Because the information they process does not conflict, the PHASE assigned to the soma may coincide with a PHASE assigned to the dendrite.
  • To reserve time for the current neuron to compute the information of the current STEP, at least one PHASE at the end of the STEP, after all PHASEs assigned to the dendrite and the soma, is reserved for the current neuron's own computation.
  • In this embodiment, separately configuring the multiplexing rules of the dendrite and the soma of the current neuron makes the information processing of the current neuron more efficient.
  • In one embodiment, the neuron information output by a front-end neuron includes artificial neuron information continuously output by the front-end neuron.
  • Specifically, when the current neuron multiplexes its information reception while a front-end neuron uses the traditional, non-multiplexed sending mode, the front-end neuron must be an artificial neuron and must send continuously.
  • In this embodiment, because the received neuron information output by the front-end neuron is continuously output artificial neuron information, the current neuron can process the neuron information that the front-end neuron sends in the traditional transmission manner, as the sketch below illustrates.
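A small illustrative sketch of why continuous sending keeps a traditional, non-multiplexed sender compatible with a multiplexed receiver; all names and values are hypothetical.

```python
# Hypothetical sketch: a non-multiplexed artificial front-end neuron keeps
# its output available in every PHASE, so a multiplexed receiver that only
# samples it in its assigned PHASE still sees a valid value.

class ContinuousArtificialNeuron:
    def __init__(self, value: float):
        self.value = value

    def output(self, phase: int) -> float:
        # Traditional sending mode: the same value is driven in every PHASE.
        return self.value


front_ends = [ContinuousArtificialNeuron(0.1 * i) for i in range(3)]
assigned_phase = {0: 0, 1: 1, 2: 2}  # front-end index -> PHASE it is read in

for phase in range(3):
    for idx, neuron in enumerate(front_ends):
        if assigned_phase[idx] == phase:
            print(f"PHASE {phase}: read {neuron.output(phase):.1f} from front-end {idx}")
```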
  • FIG. 11 is a schematic flow chart of a neuron information receiving method according to another embodiment; the method includes:
  • Step S201: determine that the information processing mode of the current neuron is the multiplexing mode, where the information processing mode further includes a non-multiplexing mode.
  • Specifically, the current neuron may choose to work in the multiplexing mode or in the non-multiplexing mode; the non-multiplexing mode is the working mode of the conventional technology.
  • Step S210: determine a front-end neuron multiplexing group, the group comprising at least two front-end neurons.
  • Step S220: configure a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle.
  • Step S230: according to the multiplexing rule, receive, within the current operation step, the neuron information output by each front-end neuron.
  • In this embodiment, the provided information processing mode lets the current neuron choose whether to work in the multiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • FIG. 12 is a schematic structural diagram of a neuron information receiving system according to an embodiment; the system includes:
  • a multiplexing group determining module 210 configured to determine a front-end neuron multiplexing group, the group comprising at least two front-end neurons;
  • an operation cycle allocation module 220 configured to configure a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle; the module is also used to divide the operation step into at least two operation cycles at equal intervals, and to separately configure the multiplexing rules of the dendrite and the soma of the current neuron; and
  • a neuron information receiving module 230 configured to receive, according to the multiplexing rule and within the current operation step, the neuron information output by each front-end neuron, including the artificial neuron information continuously output by the front-end neurons.
  • In this embodiment, setting a front-end neuron multiplexing group lets the current neuron receive, following the configured multiplexing rule, neuron information from a different front-end neuron in each operation cycle of the current operation step, so that more information can be received within the duration of the operation step; this improves the neuron information receiving capability and thereby the information processing capability of the entire neural network. Dividing the operation step into equal operation cycles lets the current neuron receive the neuron information sent by different front-end neurons at a set time interval without measuring the duration of each operation cycle, which is simpler and more reliable and improves the information processing efficiency of the neural network. Because the received neuron information is continuously output artificial neuron information, the current neuron can process the neuron information that the front-end neuron sends in the traditional transmission manner.
  • FIG. 13 is a schematic structural diagram of a neuron information receiving system according to another embodiment; the system includes:
  • a processing mode determining module 201 configured to determine that the information processing mode of the current neuron is the multiplexing mode, where the information processing mode further includes a non-multiplexing mode;
  • a multiplexing group determining module 210 configured to determine a front-end neuron multiplexing group, the group comprising at least two front-end neurons;
  • an operation cycle allocation module 220 configured to configure a multiplexing rule for the current neuron according to the front-end neuron multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the group corresponds one-to-one with an operation cycle; and
  • a neuron information receiving module 230 configured to receive, according to the multiplexing rule and within the current operation step, the neuron information output by each front-end neuron.
  • In this embodiment, the provided information processing mode lets the current neuron choose whether to work in the multiplexing mode, remaining compatible with the traditional neural information processing manner while improving the overall information processing capability of the neural network.
  • In one embodiment, the multiplexing of the current neuron can be implemented by means of registers, as shown in Table 1.
  • FIG. 14 is a schematic diagram of this embodiment given in conjunction with Table 1, which shows one register-based implementation for the dendrite and soma of the current neuron. D_type selects the processing mode of the dendrite: when it is 0, the existing processing mode is used, without multiplexing, and each dendrite receives information from one front-end neuron per STEP; when it is 1, the dendrite of the current neuron uses the multiplexing mode. Its bit width is 1, meaning the variable is described by 1 bit. D_start_phase is the first valid operation cycle of the dendrite computation, and D_end_phase is the last; used together, they indicate in the register the position of the multiplexed operation cycles. The soma fields in the latter half of Table 1 are identical in form to the dendrite fields.
  • An embodiment of the present invention further provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method in any of the embodiments above.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Abstract

A neural network computing core information processing method, system, and computer device. The method includes: determining a front-end computing core multiplexing group, the group comprising at least two front-end computing cores (S100); configuring a multiplexing rule for the current computing core according to the front-end computing core multiplexing group, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end computing cores, and each front-end computing core in the group corresponds one-to-one with an operation cycle (S200); and receiving, according to the multiplexing rule and in the current operation step, the neuron information output by each front-end computing core (S300). The method enables the current computing core to receive more information sent by front-end computing cores within the duration of the current operation step, improving the information receiving capability of the computing core and thereby the information processing capability of the entire neural network.

Description

Neural network computing core information processing method, system and computer device
Related Applications
This application claims priority to the following Chinese patent applications, all filed on February 17, 2017: No. 201710085556.9, entitled "Neuron information receiving method and system"; No. 201710085540.8, entitled "Neuron information receiving method and system"; and No. 201710085547.X, entitled "Neural network computing core information processing method and system"; the entire contents of which are incorporated herein by reference.
技术领域
本发明涉及神经网络技术领域,特别涉及神经网络计算核信息处理方法、系统和计算机设备。
背景技术
神经形态工程由Carver Mead在1990年提出,意在用大规模集成电路来模拟生物神经系统架构,构建神经形态计算系统。早期的神经形态计算系统一般通过模拟电路实现,但近些年来数字电路和数模混合电路也越来越多的被神经形态工程所使用。目前,神经形态工程与神经形态电路是国际上新兴的研究热点之一。传统的神经形态计算平台,旨在通过模拟电路仿真大脑神经元模型和离子通道活动,使用数字电路与片上存储构建连接和路由,从而能十分方便更改神经元连接图谱。
传统的神经网络中,采用计算核的方式完成大规模的信息处理任务,其中,计算核内神经元的轴突最多通过突触连接到256个神经元。在承载神经网络运算时,这限制了神经网络每一层的输出都不能大于256,即下一层的神经元数不能超过256,即在传统的神经网络中,神经元之间的连接限制,极大的限制了神经网络的信息处理能力。
发明内容
本发明实施例提供一种神经网络计算核信息处理方法、系统和计算机设备,可以扩展神经网络的信息处理能力。
一种神经网络计算核信息处理方法,包括:
确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核;
根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述 前端计算核复用组中的各前端计算核元分别与所述运算周期一一对应;
根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。
在其中一个实施例中,所述将运算步划分为至少两个运算周期,包括:
将运算步等间隔划分为至少两个运算周期。
在其中一个实施例中,所述配置当前计算核内各神经元的复用规则,包括:
分别配置当前计算核内各神经元的树突和胞体的复用规则。
在其中一个实施例中,所述前端计算核输出的神经元信息,包括:
前端计算核持续输出的人工神经元信息。
在其中一个实施例中,在确定前端计算核复用组的步骤之前,所述方法还包括:
确定当前计算核的信息处理模式为复用模式,所述信息处理模式还包括非复用模式。
一种神经元信息发送方法,包括:
确定神经元分用组,所述神经元分用组包括至少两个神经元;
根据所述神经元分用组,配置当前神经元的分用规则,所述分用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述神经元的数量,将所述神经元分用组中的各神经元分别与所述运算周期对应;
根据所述分用规则,在当前运算步的各运算周期内,所述神经元分用组中的各所述神经元分别在其对应的运算周期内输出神经元信息。
一种神经元信息接收方法,包括:
确定前端神经元复用组,所述前端神经元复用组包括至少两个前端神经元;
根据所述前端神经元复用组,配置当前神经元的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端神经元的数量,将所述前端神经元复用组中的各前端神经元分别与所述运算周期一一对应;
根据所述复用规则,在当前运算步内,分别接收各所述前端神经元输出的神经元信息。
一种计算机设备,包括存储器、处理器,及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现以上任意实施例中所述方法的步骤。
一种神经网络计算核信息处理系统,包括:
复用组确定模块,用于确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核;
运算周期分配模块,用于根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述前端计算核复用组中的各前端计算核元分别与所述运算周期一一对应;
神经元信息接收模块,用于根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。
一种神经元信息发送系统,包括:
分用组确定模块,用于确定神经元分用组,所述神经元分用组包括至少两个神经元;
运算周期分配模块,用于根据所述神经元分用组,配置当前神经元的分用规则,所述分用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述神经元的数量,将所述神经元分用组中的各神经元分别与所述运算周期对应;
神经元信息发送模块,用于根据所述分用规则,在当前运算步的各运算周期内,所述神经元分用组中的各所述神经元分别在其对应的运算周期内输出神经元信息。
一种神经元信息接收系统,包括:
复用组确定模块,用于确定前端神经元复用组,所述前端神经元复用组包括至少两个前端神经元;
运算周期分配模块,用于根据所述前端神经元复用组,配置当前神经元的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端神经元的数量,将所述前端神经元复用组中的各前端神经元分别与所述运算周期一一对应;
神经元信息接收模块,用于根据所述复用规则,在当前运算步内,分别接收各所述前端神经元输出的神经元信息。
上述神经网络计算核信息处理方法、系统和计算机设备,通过设置前端计算核复用组,使得当前计算核按照设定好的复用规则,在当前运算步的各运算周期,分别接收不同的前端计算核发送的神经元信息,以使当前计算核在当前运算步的时长内,能够接收更多的前端计算核发送的信息,提高了计算核信息接收的能力,从而提高整个神经网络的信息处理能力。
附图说明
为了清楚地说明本申请的技术方案,下面将对实施例中所需要使用的附图作简单地介绍。显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例的神经网络计算核信息处理方法的流程示意图;
图2为另一个实施例的神经网络计算核信息处理方法的流程示意图;
图3为一个实施例的神经网络计算核信息处理系统的结构示意图;
图4为另一个实施例的神经网络计算核信息处理系统的结构示意图;
图5为另一个实施例的神经网络计算核信息处理方法的示意图;
图6为一个实施例的神经元信息发送方法的流程示意图;
图7为另一个实施例的神经元信息发送方法的流程示意图;
图8为一个实施例的神经元信息发送系统的结构示意图;
图9为另一个实施例的神经元信息发送系统的结构示意图。
图10为一个实施例的神经元信息接收方法的流程示意图;
图11为另一个实施例的神经元信息接收方法的流程示意图;
图12为一个实施例的神经元信息接收系统的结构示意图;
图13为另一个实施例的神经元信息接收系统的结构示意图;
图14为另一个实施例的神经元信息接收方法的示意图。
具体实施方式
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图及实施例对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本发明,并不用于限定本发明。
图1为一个实施例的神经网络计算核信息处理方法的流程示意图,如图1所示的神经网络计算核信息处理方法,包括:
步骤S100,确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核。
具体的,为使所述当前计算核在一个运算步内能够接收更多的前端计算核输入的信息,将前端计算核在一个运算步内进行复用,需要确定进行复用的前端计算核的数量和范围,可以根据神经网络所执行的任务的需求,灵活设定任意数量的计算核进行复用,只要复用后,计算核用于发送信息的时长够用即可。
所述运算步(STEP),为计算核进行信息处理的一个固定的时长,神经网络中所有的神经元,均按照所述的运算步同步处理数据。
步骤S200,根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述前端计算核复用组中的各前端计算核元分别与所述运算周期一一对应。
具体的,所述将运算步划分为至少两个运算周期,即,将一个STEP划分为多个运算周期(也可称为PHASE),为保证进行复用的前端计算核都能与所述的运算周期进行对应,需要将运算周期的个数设定为大于或等于进行复用的所述前端计算核的数量。
所述将所述前端计算核复用组中的各前端计算核分别与所述运算周期一一对应,是指将前端计算核发送的信息,只在一个与之对应的运算周期内发送。在实际的神经网络的使用中,也可以将一个前端计算核与多个运算周期进行对应,或一个运算周期与多个前端计算核进行 对应,从而进一步的提高当前计算核的信息接收能力,但其基本的原理,均与前端计算核和运算周期一一相同,因此不再赘述,实际使用中根据需求进行灵活的设定即可。
步骤S300,根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。
具体的,当前计算核设定好复用规则后,在当前的运算步内的各运算周期,只接收与当前运算周期对应的前端计算核发送的神经元信息即可。
在本实施例中,通过设置前端计算核复用组,使得当前计算核按照设定好的复用规则,在当前运算步的各运算周期,分别接收不同的前端计算核发送的神经元信息,以使当前计算核在当前运算步的时长内,能够接收更多的前端计算核发送的信息,提高了计算核信息接收的能力,从而提高整个神经网络的信息处理能力。
在其中一个实施例中,所述将运算步划分为至少两个运算周期,包括将运算步等间隔划分为至少两个运算周期。
具体的,也可将所述运算步进行非等间隔的划分,如有的运算周期长,有的运算周期短,以使输出的神经元信息的信息量大的前端计算核,对应于相对较长的运算周期。从而保证神经元信息的接收完整性。其运算周期的长短的分配,根据需求灵活设定。
在本实施例中,通过将运算步等间隔划分为运算周期的方法,使得当前计算核可以按照设定好的时间间隔接收不同的前端计算核发送的神经元信息,而不用再去对运算周期进行时长的计量,实现方式更加简单可靠,提高了神经网络的信息处理效率。
在其中一个实施例中,所述配置当前计算核内各神经元的复用规则,包括分别配置当前计算核内各神经元的树突和胞体的复用规则。
具体的,当前计算核中的各神经元的树突,用于接收前端神经元发送的信息,当前计算核中的各神经元的胞体,用于计算所述树突接收到的信息。在当前计算核的复用规则中,将计算核中的各神经元的树突和胞体分别配置相应的复用规则,如,在当前STEP的哪些PHASE,用于树突接收前端神经元输出的神经元信息,所述胞体在当前STEP的哪个PHASE进行历史膜电位信息的处理等,因其处理的信息不冲突,所述指定的胞体的对应PHASE可以与所述树突所对应的PHASE重合。
为给当前神经元预留时间进行当前STEP的信息的计算,所述树突的胞体的复用后,会在所有树突和胞体的对应的PHASE后,在STEP的后面预留至少一个PHASE,供当前神经元计算使用。
在本实施例中,通过分别配置当前计算核内的神经元的树突和胞体的复用规则,可以使当前计算核的信息处理更有效率。
在其中一个实施例中,所述前端计算核输出的神经元信息,包括前端计算核持续输出的人工神经元信息。
具体的,在当前计算核进行信息接收时进行复用时,若前端计算核是采用传统的非复用的发送方式时,需在前端计算核为人工神经元,且发送方式为持续发送。
在本实施例中,所述接收的前端计算核输出的神经元信息为持续输出的人工神经元信息,可以使得当前计算核处理按照传统的信息发送方式发送前端计算核发送的神经元信息。
图2为另一个实施例的神经网络计算核信息处理方法的流程示意图,如图2所示的神经网络计算核信息处理方法,包括:
步骤S90,确定当前计算核的信息处理模式为复用模式,所述信息处理模式还包括非复用模式。
具体的,当前计算核可以选择工作在复用模式,也可选择工作在非复用模式,所述非复用模式即为传统技术中的工作模式。
步骤S100,确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核。
步骤S200,根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述前端计算核复用组中的各前端计算核元分别与所述运算周期一一对应。
步骤S300,根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。
在本实施例中,提供的信息处理模式,可以使得当前计算核选择是否工作在复用模式下,兼容传统的神经信息处理方式,提高神经网络的整体信息处理能力。
图3为一个实施例的神经网络计算核信息处理系统的结构示意图,如图3所示的神经网络计算核信息处理系统,包括:
复用组确定模块100,用于确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核。
运算周期分配模块200,用于根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述前端计算核复用组中的各前端计算核元分别与所述运算周期一一 对应。用于将运算步等间隔划分为至少两个运算周期。用于分别配置当前计算核内各神经元的树突和胞体的复用规则。
神经元信息接收模块300,用于根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。用于接收所述前端计算核持续输出的人工神经元信息。
在本实施例中,通过设置前端计算核复用组,使得当前计算核按照设定好的复用规则,在当前运算步的各运算周期,分别接收不同的前端计算核发送的神经元信息,以使当前计算核在当前运算步的时长内,能够接收更多的前端计算核发送的信息,提高了计算核信息接收的能力,从而提高整个神经网络的信息处理能力。通过将运算步等间隔划分为运算周期的方法,使得当前计算核可以按照设定好的时间间隔接收不同的前端计算核发送的神经元信息,而不用再去对运算周期进行时长的计量,实现方式更加简单可靠,提高了神经网络的信息处理效率。通过分别配置当前计算核内的神经元的树突和胞体的复用规则,可以使当前计算核的信息处理更有效率。所述接收的前端计算核输出的神经元信息为持续输出的人工神经元信息,可以使得当前计算核处理按照传统的信息发送方式发送前端计算核发送的神经元信息。
图4为另一个实施例的神经网络计算核信息处理系统的结构示意图,如图4所示的神经网络计算核信息处理系统,包括:
处理模式确定模块90,用于确定当前计算核的信息处理模式为复用模式,所述信息处理模式还包括非复用模式。
复用组确定模块100,用于确定前端计算核复用组,所述前端计算核复用组包括至少两个前端计算核。
运算周期分配模块200,用于根据所述前端计算核复用组,配置当前计算核的复用规则,所述复用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述前端计算核的数量,将所述前端计算核复用组中的各前端计算核元分别与所述运算周期一一对应。
神经元信息接收模块300,用于根据所述复用规则,在当前运算步,分别接收各所述前端计算核输出的神经元信息。
在本实施例中,提供的信息处理模式,可以使得当前计算核选择是否工作在复用模式下,兼容传统的神经信息处理方式,提高神经网络的整体信息处理能力。
在其中一个实施例中,可以通过寄存器的方式,实现当前计算核的复用,如表1所示:
表1
Figure PCTCN2017114662-appb-000001
图5为结合表1给出的本实施例的示意图,表1给出了当前计算核内的神经元的树突和胞体的复用的寄存器的实现方式之一,其中D_type,标识树突的处理模式的选择,当其为0时,是现有的处理模式,不复用,当其为1时,当前神经元的树突采用复用模式。所述的位宽为1,表示利用1个bit的字节描述此变量。D_start_phase为树突计算起始有效运算周期,D_end_phase为树突计算最后有效运算周期,两者配合使用,用于在寄存器中指明复用的运算周期的位置。表1中后半部的胞体与树突部分相同。
图6为一个实施例的神经元信息发送方法的流程示意图,如图6所示的神经元信息发送方法,包括:
步骤S110,确定神经元分用组,所述神经元分用组包括至少两个神经元。
具体的,为使一个神经网络中的神经元在一个运算步内能够发送更多的神经元信息,将各神经元在一个运算步内进行分用,需要确定进行分用的神经元的数量和范围,可以根据神经网络所执行的任务的需求,灵活设定任意数量的神经元进行分用,只要分用后,神经元用于发送信息的时长够用即可。
所述运算步(STEP),为神经元进行信息处理的一个固定的时长,神经网络中所有的神经元,均按照所述的运算步同步处理数据。
步骤S120,根据所述神经元分用组,配置当前神经元的分用规则,所述分用规则为将运算步划分为至少两个运算周期,且所述运算周期的数量大于或等于所述神经元的数量,将所述神经元分用组中的各神经元分别与所述运算周期对应。
具体的,所述将运算步划分为至少两个运算周期,即,将一个STEP划分为多个运算周 期(也可称为PHASE),为保证进行分用的神经元都能与所述的运算周期进行对应,需要将运算周期的个数设定为大于或等于进行分用的所述前端神经元的数量。
将所述神经元分用组中的各神经元分别与所述运算周期对应,是指将前端神经元发送的信息,在与之对应的运算周期内发送。
步骤S130,根据所述分用规则,在当前运算步的各运算周期内,所述神经元分用组中的各所述神经元分别在其对应的运算周期内输出神经元信息。
具体的,当前神经元设定好分用规则后,在当前的运算步内的对应的运算周期,发送与当前运算周期对应的神经元信息即可。
在本实施例中,通过设置神经元分用组,使得当前神经元按照设定好的分用规则,在当前运算步,按照设定好的运算周期有序发送神经元信息,以使在当前运算步的时长内,神经元能够发送更多的信息,提高了神经元信息发送的能力,从而提高整个神经网络的信息处理能力。
在其中一个实施例中,所述将运算步划分为至少两个运算周期,包括将运算步等间隔划分为至少两个运算周期。
具体的,也可将所述运算步进行非等间隔的划分,如有的运算周期长,有的运算周期短,以使输出的神经元信息的信息量大的神经元,对应于相对较长的运算周期。从而保证神经元信息发送的完整性。其运算周期的长短的分配,根据需求灵活设定。
在本实施例中,通过将运算步等间隔划分为运算周期的方法,使得当前神经元可以按照设定好的时间间隔发送神经元信息,而不用再去对运算周期进行时长的计量,实现方式更加简单可靠,提高了神经网络的信息处理效率。
In one of the embodiments, making each neuron in the neuron demultiplexing group correspond to the operation cycles includes:

one neuron in the neuron demultiplexing group corresponding to one operation cycle; or

one neuron in the neuron demultiplexing group corresponding to multiple operation cycles, with each operation cycle corresponding to only one neuron.

Specifically, using a one-to-one correspondence between neurons and operation cycles, or a correspondence of one neuron to multiple operation cycles, ensures the completeness of the information a neuron outputs and makes the demultiplexing of neurons more flexible.

In this embodiment, a neuron may correspond to one operation cycle, or one neuron may correspond to multiple operation cycles, so that a neuron whose neuron information carries more data has enough operation cycle time to send it, ensuring the completeness of the neuron information being sent. A check of such an assignment is sketched below.
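The constraint that one neuron may own several operation cycles while each cycle belongs to at most one neuron can be checked mechanically; a hedged sketch, where the assignment dictionary is a hypothetical representation of the configured rule:

```python
def validate_assignment(assignment, num_phases):
    """Check a demultiplexing assignment: `assignment` maps neuron id ->
    list of PHASE indices. A neuron may own several PHASEs, but each
    PHASE may belong to at most one neuron."""
    owner = {}
    for neuron_id, phases in assignment.items():
        for phase in phases:
            assert 0 <= phase < num_phases, f"PHASE {phase} out of range"
            assert phase not in owner, \
                f"PHASE {phase} assigned to both {owner[phase]} and {neuron_id}"
            owner[phase] = neuron_id

# Example: neuron 0 carries more information and owns PHASEs 0-2;
# neurons 1 and 2 own one PHASE each.
validate_assignment({0: [0, 1, 2], 1: [3], 2: [4]}, num_phases=5)
```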
In one of the embodiments, outputting neuron information within the corresponding operation cycle includes outputting artificial neuron information or spiking neuron information.

Specifically, both the output of artificial neuron information and the output of spiking neuron information are supported, which is compatible with both artificial neural networks and spiking neural networks.

In this embodiment, either artificial neuron information or spiking neuron information can be output, which improves the information sending capability of artificial neural networks and spiking neural networks.
FIG. 7 is a schematic flowchart of a neuron information sending method according to another embodiment. The method shown in FIG. 7 includes:

Step S101: determining that the information processing mode of the current neuron is a demultiplexing mode, where the information processing mode further includes a non-demultiplexing mode.

Specifically, the current neuron can choose to work in the demultiplexing mode or in the non-demultiplexing mode; the non-demultiplexing mode is the working mode used in the traditional technology.

Step S110: determining a neuron demultiplexing group, where the neuron demultiplexing group includes at least two neurons.

Step S120: configuring, according to the neuron demultiplexing group, the demultiplexing rule of the current neuron, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the neuron demultiplexing group corresponds to an operation cycle.

Step S130: according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the neuron demultiplexing group outputs its neuron information within its corresponding operation cycle.

In this embodiment, the provided information processing modes allow the current neuron to choose whether to work in the demultiplexing mode, which keeps compatibility with the traditional neural information processing method and improves the overall information processing capability of the neural network.
FIG. 8 is a schematic structural diagram of a neuron information sending system according to an embodiment. The system shown in FIG. 8 includes:

a demultiplexing group determination module 110, configured to determine a neuron demultiplexing group, where the neuron demultiplexing group includes at least two neurons;

an operation cycle allocation module 120, configured to configure, according to the neuron demultiplexing group, the demultiplexing rule of the current neuron, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the neuron demultiplexing group corresponds to an operation cycle; the module is also configured to divide the operation step into at least two operation cycles at equal intervals, and to make one neuron in the neuron demultiplexing group correspond to one operation cycle, or one neuron correspond to multiple operation cycles with each operation cycle corresponding to only one neuron; and

a neuron information sending module 130, configured such that, according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the neuron demultiplexing group outputs its neuron information within its corresponding operation cycle; the module is configured to output artificial neuron information or spiking neuron information.

In this embodiment, by setting up a neuron demultiplexing group, the current neuron sends neuron information in an orderly manner in the configured operation cycles of the current operation step according to the configured demultiplexing rule, so that the neurons can send more information within the duration of the current operation step; this improves the neuron information sending capability and thus the information processing capability of the entire neural network. By dividing the operation step into operation cycles at equal intervals, the current neuron can send neuron information at preset time intervals without measuring the duration of each operation cycle, which makes the implementation simpler and more reliable and improves the information processing efficiency of the neural network. A neuron may correspond to one operation cycle, or one neuron may correspond to multiple operation cycles, so that a neuron whose neuron information carries more data has enough operation cycle time to send it, ensuring the completeness of the neuron information being sent. Either artificial neuron information or spiking neuron information can be output, which improves the information sending capability of artificial neural networks and spiking neural networks.
FIG. 9 is a schematic structural diagram of a neuron information sending system according to another embodiment. The system shown in FIG. 9 includes:

a processing mode determination module 101, configured to determine that the information processing mode of the current neuron is a demultiplexing mode, where the information processing mode further includes a non-demultiplexing mode;

a demultiplexing group determination module 110, configured to determine a neuron demultiplexing group, where the neuron demultiplexing group includes at least two neurons;

an operation cycle allocation module 120, configured to configure, according to the neuron demultiplexing group, the demultiplexing rule of the current neuron, where the demultiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of neurons, and each neuron in the neuron demultiplexing group corresponds to an operation cycle; and

a neuron information sending module 130, configured such that, according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the neuron demultiplexing group outputs its neuron information within its corresponding operation cycle.

In this embodiment, the provided information processing modes allow the current neuron to choose whether to work in the demultiplexing mode, which keeps compatibility with the traditional neural information processing method and improves the overall information processing capability of the neural network.
FIG. 10 is a schematic flowchart of a neuron information receiving method according to an embodiment. The method shown in FIG. 10 includes:

Step S210: determining a front-end neuron multiplexing group, where the front-end neuron multiplexing group includes at least two front-end neurons.

Specifically, to enable the current neuron to receive more information input by front-end neurons within one operation step, the front-end neurons are multiplexed within one operation step. The number and range of the front-end neurons to be multiplexed must be determined; any number of neurons can be flexibly selected for multiplexing according to the requirements of the task executed by the neural network, as long as, after multiplexing, the time each neuron has for sending its information is sufficient.

The operation step (STEP) is a fixed duration in which neurons process information; all neurons in the neural network process data synchronously according to the operation step.

Step S220: configuring, according to the front-end neuron multiplexing group, the multiplexing rule of the current neuron, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the front-end neuron multiplexing group is placed in one-to-one correspondence with an operation cycle.

Specifically, dividing the operation step into at least two operation cycles means dividing one STEP into multiple operation cycles (also called PHASEs). To ensure that every multiplexed front-end neuron can be matched with an operation cycle, the number of operation cycles must be set greater than or equal to the number of front-end neurons being multiplexed.

Placing each front-end neuron in the front-end neuron multiplexing group in one-to-one correspondence with an operation cycle means that the information sent by a front-end neuron is sent only within the one operation cycle corresponding to it. In the actual use of a neural network, one front-end neuron may also be made to correspond to multiple operation cycles, or one operation cycle to multiple front-end neurons, to further improve the information receiving capability of the current neuron; the basic principle is the same as that of the one-to-one correspondence between front-end neurons and operation cycles, so it is not repeated here, and in practice the correspondence can be set flexibly according to requirements.

Step S230: receiving, according to the multiplexing rule, the neuron information output by each of the front-end neurons within the current operation step.

Specifically, once the multiplexing rule of the current neuron is configured, in each operation cycle within the current operation step the neuron simply receives only the neuron information sent by the front-end neuron corresponding to the current operation cycle.

In this embodiment, by setting up a front-end neuron multiplexing group, the current neuron receives, according to the configured multiplexing rule, the neuron information sent by different front-end neurons in the respective operation cycles of the current operation step, so that within the duration of the current operation step the current neuron can receive information from more front-end neurons; this improves the neuron information receiving capability and thus the information processing capability of the entire neural network.
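Mirroring the sending sketch above, the receive side can be modeled as filtering each PHASE's traffic down to the front-end neuron assigned to it; the inbox and schedule structures below are hypothetical data representations, not the patented circuit:

```python
def receive_step_multiplexed(inbox, schedule, num_phases):
    """One STEP on the receiving side. `inbox[phase]` holds the messages
    that arrived in that PHASE as (sender_id, payload) pairs; `schedule`
    maps PHASE -> expected front-end neuron id. Only the message from
    the front-end neuron assigned to the current PHASE is accepted."""
    accepted = {}
    for phase in range(num_phases):
        expected = schedule.get(phase)
        for sender_id, payload in inbox.get(phase, []):
            if sender_id == expected:
                accepted[sender_id] = payload
    return accepted

# Example: front-end neurons 7 and 9 are multiplexed onto PHASEs 0 and 1.
inbox = {0: [(7, "v=0.3")], 1: [(9, "v=1.1")]}
print(receive_step_multiplexed(inbox, {0: 7, 1: 9}, num_phases=2))
```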
In one of the embodiments, dividing the operation step into at least two operation cycles includes dividing the operation step into at least two operation cycles at equal intervals.

Specifically, the operation step may also be divided at unequal intervals, with some operation cycles longer and some shorter, so that a front-end neuron whose output carries more information corresponds to a relatively longer operation cycle, thereby ensuring the completeness of the received neuron information. The lengths of the operation cycles are allocated flexibly according to requirements.

In this embodiment, by dividing the operation step into operation cycles at equal intervals, the current neuron can receive the neuron information sent by different front-end neurons at preset time intervals without measuring the duration of each operation cycle, which makes the implementation simpler and more reliable and improves the information processing efficiency of the neural network.
In one of the embodiments, configuring the multiplexing rule of the current neuron includes separately configuring the multiplexing rules of the dendrite and the soma of the current neuron.

Specifically, the dendrite of the current neuron is used to receive the information sent by front-end neurons, and the soma of the current neuron is used to compute the information received by the dendrite. In the multiplexing rule of the current neuron, the dendrite and the soma are each assigned their own multiplexing rules, for example, which PHASEs of the current STEP are used by the dendrite to receive the neuron information output by the front-end neurons, and in which PHASE of the current STEP the soma processes the historical membrane potential information. Since the information they process does not conflict, the PHASE assigned to the soma may coincide with the PHASE assigned to the dendrite.

To reserve time for the current neuron to perform the computation of the current STEP, after the dendrite and soma are multiplexed, at least one PHASE is reserved at the end of the STEP, after all the PHASEs assigned to the dendrite and soma, for the current neuron's own computation.

In this embodiment, by separately configuring the multiplexing rules of the dendrite and the soma of the current neuron, the information processing of the current neuron can be made more efficient.
In one of the embodiments, the neuron information output by the front-end neurons includes artificial neuron information continuously output by the front-end neurons.

Specifically, when the current neuron performs multiplexing while receiving information, if a front-end neuron uses the traditional non-multiplexed sending mode, that front-end neuron must be an artificial neuron and its sending mode must be continuous sending.

In this embodiment, the received neuron information output by the front-end neurons is continuously output artificial neuron information, which enables the current neuron to process neuron information that the front-end neurons send in the traditional sending mode.
FIG. 11 is a schematic flowchart of a neuron information receiving method according to another embodiment. The method shown in FIG. 11 includes:

Step S201: determining that the information processing mode of the current neuron is a multiplexing mode, where the information processing mode further includes a non-multiplexing mode.

Specifically, the current neuron can choose to work in the multiplexing mode or in the non-multiplexing mode; the non-multiplexing mode is the working mode used in the traditional technology.

Step S210: determining a front-end neuron multiplexing group, where the front-end neuron multiplexing group includes at least two front-end neurons.

Step S220: configuring, according to the front-end neuron multiplexing group, the multiplexing rule of the current neuron, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the front-end neuron multiplexing group is placed in one-to-one correspondence with an operation cycle.

Step S230: receiving, according to the multiplexing rule, the neuron information output by each of the front-end neurons within the current operation step.

In this embodiment, the provided information processing modes allow the current neuron to choose whether to work in the multiplexing mode, which keeps compatibility with the traditional neural information processing method and improves the overall information processing capability of the neural network.
FIG. 12 is a schematic structural diagram of a neuron information receiving system according to an embodiment. The system shown in FIG. 12 includes:

a multiplexing group determination module 210, configured to determine a front-end neuron multiplexing group, where the front-end neuron multiplexing group includes at least two front-end neurons;

an operation cycle allocation module 220, configured to configure, according to the front-end neuron multiplexing group, the multiplexing rule of the current neuron, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the front-end neuron multiplexing group is placed in one-to-one correspondence with an operation cycle; the module is configured to divide the operation step into at least two operation cycles at equal intervals, and is further configured to separately configure the multiplexing rules of the dendrite and the soma of the current neuron; and

a neuron information receiving module 230, configured to receive, according to the multiplexing rule, the neuron information output by each of the front-end neurons within the current operation step, and to receive the artificial neuron information continuously output by the front-end neurons.

In this embodiment, by setting up a front-end neuron multiplexing group, the current neuron receives, according to the configured multiplexing rule, the neuron information sent by different front-end neurons in the respective operation cycles of the current operation step, so that within the duration of the current operation step the current neuron can receive information from more front-end neurons; this improves the neuron information receiving capability and thus the information processing capability of the entire neural network. By dividing the operation step into operation cycles at equal intervals, the current neuron can receive the neuron information sent by different front-end neurons at preset time intervals without measuring the duration of each operation cycle, which makes the implementation simpler and more reliable and improves the information processing efficiency of the neural network. By separately configuring the multiplexing rules of the dendrite and the soma of the current neuron, the information processing of the current neuron can be made more efficient. The received neuron information output by the front-end neurons is continuously output artificial neuron information, which enables the current neuron to process neuron information sent by the front-end neurons in the traditional sending mode.
FIG. 13 is a schematic structural diagram of a neuron information receiving system according to another embodiment. The system shown in FIG. 13 includes:

a processing mode determination module 201, configured to determine that the information processing mode of the current neuron is a multiplexing mode, where the information processing mode further includes a non-multiplexing mode;

a multiplexing group determination module 210, configured to determine a front-end neuron multiplexing group, where the front-end neuron multiplexing group includes at least two front-end neurons;

an operation cycle allocation module 220, configured to configure, according to the front-end neuron multiplexing group, the multiplexing rule of the current neuron, where the multiplexing rule divides the operation step into at least two operation cycles, the number of operation cycles is greater than or equal to the number of front-end neurons, and each front-end neuron in the front-end neuron multiplexing group is placed in one-to-one correspondence with an operation cycle; the module is configured to divide the operation step into at least two operation cycles at equal intervals, and is further configured to separately configure the multiplexing rules of the dendrite and the soma of the current neuron; and

a neuron information receiving module 230, configured to receive, according to the multiplexing rule, the neuron information output by each of the front-end neurons within the current operation step, and to receive the artificial neuron information continuously output by the front-end neurons.

In this embodiment, the provided information processing modes allow the current neuron to choose whether to work in the multiplexing mode, which keeps compatibility with the traditional neural information processing method and improves the overall information processing capability of the neural network.
In one of the embodiments, the multiplexing of the current neuron can be implemented by means of registers, as shown in Table 1:

Table 1
[Table 1 is provided as the image PCTCN2017114662-appb-000002 in the original publication; its register fields for dendrite and soma multiplexing are described below.]
FIG. 14 is a schematic diagram of this embodiment, given in combination with Table 1. Table 1 gives one register implementation of the multiplexing of the dendrite and soma of the current neuron. Here D_type selects the processing mode of the dendrite: when it is 0, the existing non-multiplexed processing mode is used, in which each dendrite receives the information of one front-end neuron per STEP without multiplexing; when it is 1, the dendrite of the current neuron uses the multiplexing mode. Its bit width is 1, meaning that this variable is described with a single bit. D_start_phase is the first valid operation cycle of the dendrite computation and D_end_phase is the last valid operation cycle of the dendrite computation; used together, they indicate in the register the positions of the multiplexed operation cycles. The soma fields in the second half of Table 1 are the same as the dendrite fields.
Based on the same inventive concept, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the methods mentioned in the above embodiments.

A person of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments can be accomplished by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent of the present invention. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention shall be subject to the appended claims.

Claims (11)

  1. A neural network computing core information processing method, characterized in that the method comprises:
    determining a front-end computing core multiplexing group, the front-end computing core multiplexing group comprising at least two front-end computing cores;
    configuring, according to the front-end computing core multiplexing group, a multiplexing rule of a current computing core, the multiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of front-end computing cores, and to place each front-end computing core in the front-end computing core multiplexing group in one-to-one correspondence with an operation cycle; and
    receiving, according to the multiplexing rule, in the current operation step, the neuron information output by each of the front-end computing cores.
  2. The neural network computing core information processing method according to claim 1, characterized in that dividing the operation step into at least two operation cycles comprises:
    dividing the operation step into at least two operation cycles at equal intervals.
  3. The neural network computing core information processing method according to claim 1, characterized in that configuring the multiplexing rule of each neuron in the current computing core comprises:
    separately configuring multiplexing rules of dendrites and somas of the neurons in the current computing core.
  4. The neural network computing core information processing method according to claim 1, characterized in that the neuron information output by the front-end computing cores comprises:
    artificial neuron information continuously output by the front-end computing cores.
  5. The neural network computing core information processing method according to claim 1, characterized in that, before the step of determining the front-end computing core multiplexing group, the method further comprises:
    determining that the information processing mode of the current computing core is a multiplexing mode, the information processing mode further comprising a non-multiplexing mode.
  6. A neuron information sending method, characterized in that the method comprises:
    determining a neuron demultiplexing group, the neuron demultiplexing group comprising at least two neurons;
    configuring, according to the neuron demultiplexing group, a demultiplexing rule of a current neuron, the demultiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of neurons, and to make each neuron in the neuron demultiplexing group correspond to an operation cycle; and
    according to the demultiplexing rule, in the operation cycles of the current operation step, outputting, by each neuron in the neuron demultiplexing group, neuron information within its corresponding operation cycle.
  7. A neuron information receiving method, characterized in that the method comprises:
    determining a front-end neuron multiplexing group, the front-end neuron multiplexing group comprising at least two front-end neurons;
    configuring, according to the front-end neuron multiplexing group, a multiplexing rule of a current neuron, the multiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of front-end neurons, and to place each front-end neuron in the front-end neuron multiplexing group in one-to-one correspondence with an operation cycle; and
    receiving, according to the multiplexing rule, within the current operation step, the neuron information output by each of the front-end neurons.
  8. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-7.
  9. A neural network computing core information processing system, characterized by comprising:
    a multiplexing group determination module, configured to determine a front-end computing core multiplexing group, the front-end computing core multiplexing group comprising at least two front-end computing cores;
    an operation cycle allocation module, configured to configure, according to the front-end computing core multiplexing group, a multiplexing rule of a current computing core, the multiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of front-end computing cores, and to place each front-end computing core in the front-end computing core multiplexing group in one-to-one correspondence with an operation cycle; and
    a neuron information receiving module, configured to receive, according to the multiplexing rule, in the current operation step, the neuron information output by each of the front-end computing cores.
  10. A neuron information sending system, characterized by comprising:
    a demultiplexing group determination module, configured to determine a neuron demultiplexing group, the neuron demultiplexing group comprising at least two neurons;
    an operation cycle allocation module, configured to configure, according to the neuron demultiplexing group, a demultiplexing rule of a current neuron, the demultiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of neurons, and to make each neuron in the neuron demultiplexing group correspond to an operation cycle; and
    a neuron information sending module, configured such that, according to the demultiplexing rule, in the operation cycles of the current operation step, each neuron in the neuron demultiplexing group outputs neuron information within its corresponding operation cycle.
  11. A neuron information receiving system, characterized by comprising:
    a multiplexing group determination module, configured to determine a front-end neuron multiplexing group, the front-end neuron multiplexing group comprising at least two front-end neurons;
    an operation cycle allocation module, configured to configure, according to the front-end neuron multiplexing group, a multiplexing rule of a current neuron, the multiplexing rule being to divide an operation step into at least two operation cycles, with the number of operation cycles greater than or equal to the number of front-end neurons, and to place each front-end neuron in the front-end neuron multiplexing group in one-to-one correspondence with an operation cycle; and
    a neuron information receiving module, configured to receive, according to the multiplexing rule, within the current operation step, the neuron information output by each of the front-end neurons.
PCT/CN2017/114662 2017-02-17 2017-12-05 Neural network computing core information processing method, system and computer device WO2018149217A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201710085547.X 2017-02-17
CN201710085540.8A CN106971228B (zh) 2017-02-17 2017-02-17 Neuron information sending method and system
CN201710085547.XA CN106971229B (zh) 2017-02-17 2017-02-17 Neural network computing core information processing method and system
CN201710085556.9A CN106971227B (zh) 2017-02-17 2017-02-17 Neuron information receiving method and system
CN201710085556.9 2017-02-17
CN201710085540.8 2017-02-17

Publications (1)

Publication Number Publication Date
WO2018149217A1 true WO2018149217A1 (zh) 2018-08-23

Family

ID=63169155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/114662 WO2018149217A1 (zh) 2017-02-17 2017-12-05 Neural network computing core information processing method, system and computer device

Country Status (1)

Country Link
WO (1) WO2018149217A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678002A (zh) * 2013-12-09 2014-03-26 Huawei Technologies Co., Ltd. Resource multiplexing control method and apparatus
US20150058268A1 (en) * 2012-01-27 2015-02-26 International Business Machines Corporation Hierarchical scalable neuromorphic synaptronic system for synaptic and structural plasticity
CN106030622A (zh) * 2014-02-21 2016-10-12 Qualcomm Incorporated In-situ neural network co-processing
CN106203621A (zh) * 2016-07-11 2016-12-07 Yao Song Processor for convolutional neural network computation
CN106971228A (zh) * 2017-02-17 2017-07-21 Tsinghua University Neuron information sending method and system
CN106971227A (zh) * 2017-02-17 2017-07-21 Tsinghua University Neuron information receiving method and system
CN106971229A (zh) * 2017-02-17 2017-07-21 Tsinghua University Neural network computing core information processing method and system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17896646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17896646

Country of ref document: EP

Kind code of ref document: A1