CN110188872B - Heterogeneous cooperative system and communication method thereof


Info

Publication number
CN110188872B
Authority
CN
China
Prior art keywords
information
computing unit
pulse
neural network
routing
Prior art date
Legal status
Active
Application number
CN201910488020.0A
Other languages
Chinese (zh)
Other versions
CN110188872A (en)
Inventor
施路平
王冠睿
邹哲
Current Assignee
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd
Priority to CN201910488020.0A
Publication of CN110188872A
Priority to PCT/CN2020/090562 (WO2020244370A1)
Application granted
Publication of CN110188872B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

The embodiment of the invention discloses a heterogeneous cooperative system and a communication method thereof. The input type of a target computing unit is obtained according to a routing table, and a routing packet whose data type matches the input type of the target computing unit is determined according to the output information of the computing unit and transmitted to the target computing unit. Different data types can therefore be transmitted by multiplexing a routing packet with a uniform format, which reduces the cost required for data transmission among computing units of different types, reduces delay time, improves data processing efficiency, and improves the universality and flexibility of the heterogeneous cooperative system.

Description

Heterogeneous cooperative system and communication method thereof
Technical Field
The invention relates to the technical field of computers, in particular to a heterogeneous cooperative system and a communication method thereof.
Background
The neural network is a computing system that processes data by simulating the synapse-neuron structure of a biological brain, and consists of computing nodes divided into multiple layers and the connections among those layers. A neural network has strong nonlinear and adaptive data processing capabilities.
Typical neural networks today include artificial neural networks and impulse (spiking) neural networks. The artificial neural network is based on a simplified neural model and a high abstraction of the brain network: artificial neurons are connected into a network according to a certain structure, and the input and output of the network are numerical quantity information. Deeper network models can be constructed by taking the hierarchical structure of the brain as a reference, and such networks show obvious advantages in problems such as feature extraction and pattern recognition. However, the numerical quantity information loses time information to a certain extent, and a large-scale network consumes more computing resources and more energy. The impulse neural network is closer to the actual biological model: neurons are modeled with differential equations, the input and output are pulse sequences represented by 0/1, and computing tasks are completed through membrane potential accumulation and threshold-triggered firing. It has network dynamics characteristics, contains rich time information, has certain advantages in processing sequence problems, and its event-driven processing mode also brings a low-power-consumption characteristic. However, its operation accuracy and its support for large-scale data and networks still need to be improved. Therefore, neither the artificial neural network nor the impulse neural network alone can handle complex artificial general intelligence task scenes, such as task scenes that require both accurate numerical values and quick response.
Disclosure of Invention
In view of this, embodiments of the present invention provide a heterogeneous cooperative system and a communication method thereof, so that different data types can be transmitted by multiplexing a routing packet with a uniform format, which reduces the cost required for data transmission between different types of computing units, reduces delay time, improves data processing efficiency, and improves the universality and flexibility of the heterogeneous cooperative system.
In a first aspect, an embodiment of the present invention provides a communication method for a heterogeneous collaborative system, where the system includes at least two computing units, and the method includes:
the method comprises the steps that a first computing unit periodically receives a first routing packet, wherein the first routing packet comprises first address information and a first data load, the first data load is matched with an input type of the first computing unit, and the first data load is numerical quantity information used for artificial neural network computing or pulse information used for pulse neural network computing;
in response to the first address information matching the address information of the first computing unit, the first computing unit determining input information from the first data payload;
processing the input information and determining output information;
determining a second routing packet according to the output information and a routing table, wherein the second routing packet comprises second address information and a second data load, and the type of the second data load is matched with the input type of a second computing unit corresponding to the second address information; the routing table comprises an input type, address information and a routing connection relation of a computing unit;
and sending the second routing packet.
Further, the at least two calculation units comprise at least one artificial neural network calculation unit and at least one impulse neural network calculation unit; or
The at least two computing units comprise at least two hybrid neural network computing units which simultaneously support artificial neural network computing and impulse neural network computing; or
The at least two computing units include at least one artificial neural network computing unit, at least one impulse neural network computing unit, and at least one hybrid neural network computing unit.
Further, the output information of the first calculation unit is a numerical quantity, and determining the second routing packet according to the output information and the routing table includes:
acquiring second address information and an input type of the second computing unit from the routing table in response to the numerical quantity being within a threshold interval;
in response to the input type of the second computing unit being pulse information, acquiring corresponding pulse state information from the routing table;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein a second data load in the second routing packet is pulse information; wherein the pulse information includes the pulse state information.
Further, the output information of the first calculation unit is a numerical quantity, and determining the second routing packet according to the output information and the routing table further includes:
acquiring second address information and an input type of the second calculation unit from the routing table in response to the numerical quantity information being within a threshold interval;
and in response to the input type of the second computing unit being numerical quantity information, determining the second routing packet according to the second address information and the numerical quantity, wherein a second data load in the second routing packet is the numerical quantity information.
Further, the output information of the first calculation unit is a pulse, and determining the second routing packet according to the output information and the routing table includes:
acquiring second address information and an input type of the second calculation unit from the routing table in response to the membrane potential of the pulse neuron reaching a potential threshold;
in response to the input type of the second computing unit being pulse information, acquiring corresponding pulse state information from the routing table;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein a second data load in the second routing packet is pulse information; wherein the pulse information includes pulse state information.
Further, the output information of the first calculation unit is a pulse, and the determining the second routing packet according to the output information and the routing table further includes:
acquiring second address information and an input type of the second calculation unit from the routing table in response to the membrane potential of the pulse neuron reaching a potential threshold;
in response to the input type of the second computing unit being numerical quantity information, acquiring corresponding numerical quantity information according to the membrane potential of the pulse neuron, wherein the numerical quantity information is the membrane potential of the pulse neuron or the difference value between the membrane potential of the pulse neuron and the potential threshold;
and determining the second routing packet according to the second address information and the numerical value information, wherein a second data load in the second routing packet is the numerical value information.
Further, the pulse information comprises a strong suppression signal flag bit, a forced release information flag bit and pulse delay information;
the strong suppression signal flag bit is used for suppressing pulse information, and the forced distribution information flag bit is used for forcibly generating the second routing packet.
In a second aspect, an embodiment of the present invention provides a heterogeneous collaboration system, where the system includes at least two computing units;
the calculation unit includes:
a receiving module configured to periodically receive a first routing packet, where the first routing packet includes first address information and a first data payload, and the first data payload is numerical quantity information for artificial neural network computation or pulse information for impulse neural network computation;
an acquisition module configured to determine input information from the first data load in response to the first address information matching address information of the computing unit;
a processing module configured to process the input information and determine output information;
a determining module configured to determine a second routing packet according to the output information and a routing table, where the second routing packet includes second address information and a second data payload, and a type of the second data payload matches an input type of a computing unit corresponding to the second address information; the routing table comprises an input type, an address and a routing connection relation of a computing unit;
a transmitting module configured to transmit the second routing packet.
In a third aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method described above.
In a fourth aspect, embodiments of the present invention provide a computer program product, which when run on a computer, causes the computer to perform the method as described above.
According to the embodiment of the invention, the input type of the target computing unit is obtained according to the routing table, and a routing packet containing a data type matched with the input type of the target computing unit is determined according to the output information of the computing unit and transmitted to the target computing unit. Different data types can therefore be transmitted by multiplexing a routing packet with a uniform format, which reduces the cost required for data transmission among computing units of different types, reduces delay time, improves data processing efficiency, and improves the universality and flexibility of the heterogeneous cooperative system.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a heterogeneous collaboration system of an embodiment of the invention;
FIG. 2 is a schematic diagram of a neural network computational unit of an embodiment of the present invention;
FIG. 3 is a schematic diagram of a routing packet according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another routing packet of an embodiment of the present invention;
FIG. 5 is a flow chart of a communication method of a heterogeneous collaborative system according to an embodiment of the present invention;
FIG. 6 is a process diagram of a communication method of a heterogeneous cooperative system according to an embodiment of the present invention;
FIG. 7 is another process diagram of a communication method of a heterogeneous cooperative system according to an embodiment of the present invention;
FIG. 8 is a schematic device diagram of a neural network computing unit according to an embodiment of the present invention.
Detailed Description
The present invention will be described below based on examples, but the present invention is not limited to only these examples. In the following detailed description of the present invention, certain specific details are set forth. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, in the sense of "including, but not limited to". The terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
When a single neural network system is used for data processing, it cannot meet the computing requirements of scenes that need both accurate numerical processing and quick response. Therefore, hybrid computation with both an artificial neural network and an impulse neural network can be adopted to meet these computing requirements. At present, however, when hybrid computation with an artificial neural network and an impulse neural network is performed, an additional communication and signal conversion unit is often needed, which brings additional time delay and hardware cost and reduces the throughput of the hardware system. Therefore, the embodiments of the present invention provide a heterogeneous cooperative system and a communication method thereof, so as to reduce the cost required for data transmission between different types of computing units, thereby reducing delay time and improving data processing efficiency.
Fig. 1 is a schematic diagram of a heterogeneous collaboration system according to an embodiment of the invention. As shown in fig. 1, the heterogeneous collaborative system 1 of the present embodiment includes artificial neural network computing units 11 to 13, impulse neural network computing units 14 to 16, and hybrid neural network computing units 17 to 19. It should be understood that the heterogeneous collaborative system according to the embodiment of the present invention includes at least two computing units, where the at least two computing units include at least one artificial neural network computing unit and at least one impulse neural network computing unit, or include at least two hybrid neural network computing units, or include at least one artificial neural network computing unit, at least one impulse neural network computing unit, and at least one hybrid neural network computing unit. In the embodiment of the present invention, the heterogeneous cooperative system 1 includes three artificial neural network computing units, three impulse neural network computing units, and three hybrid neural network computing units as an example for description, and it should be understood that the present embodiment is not limited thereto.
The input and output of the artificial neural network computing units 11-13 are numerical quantity information used for artificial neural network computation. The inputs and outputs of the spiking neural network computing units 14-16 are the spiking information used for the spiking neural network computations.
The hybrid neural network computing units 17 to 19 have the following four modes:
1. the impulse neural network computing mode, that is, in this mode, the input and the output of the hybrid neural network computing unit are both impulse information for the impulse neural network computing.
2. The artificial neural network computing mode, that is, in this mode, the input and the output of the hybrid neural network computing unit are both numerical quantity information for the artificial neural network computing.
3. And a first hybrid neural network computing mode in which the input of the hybrid neural network computing unit is impulse information for the impulse neural network computing and the output is numerical quantity information for the artificial neural network computing.
4. And a second hybrid neural network computing mode in which the input of the hybrid neural network computing unit is numerical quantity information for the artificial neural network computation and the output is impulse information for the impulse neural network computation.
Fig. 2 is a schematic diagram of a neural network computing unit of the present embodiment. As shown in fig. 2, the neural network computing unit 2 includes a plurality of neurons 21, and the plurality of neurons 21 are connected by synapses 22 to form a single-layer or multi-layer structure. Synapses 22 have synaptic weights, wherein a synaptic weight characterizes a weighted value of a post-synaptic neuron's reception of a pre-synaptic neuron's output. In the present embodiment, if the neural network computing unit 2 is an artificial neural network computing unit, the neuron 21 is an artificial neuron. If the neural network computing unit 2 is a spiking neural network computing unit, the neuron 21 is a spiking neuron. If the neural network computing unit 2 is a hybrid neural network computing unit, the neuron 21 is an artificial neuron or a spiking neuron, that is, the hybrid neural network computing unit includes an artificial neuron and a spiking neuron. Wherein, the membrane potential of the pulse neuron has time accumulation effect, and the output is 0 or 1 to represent whether pulse information is generated. Optionally, the output is 1 when the membrane potential of the pulse neuron reaches the potential threshold, and the output is 0 when the membrane potential of the pulse neuron is smaller than the potential threshold. The output of the artificial neuron is generated after nonlinear transformation according to the current synapse calculation result.
In an optional implementation manner, when a given task to be processed needs to be executed on the heterogeneous collaborative system, information such as a parameter of the given task to be processed is mapped to the configuration of each computing unit of the heterogeneous collaborative system through a predetermined mapping tool, and a corresponding routing table is obtained. The configuration of the computing unit comprises synaptic weights stored in the computing unit and used for synaptic computation, the number of inputs which can be actually received by each computing unit and the number of neurons which need to be computed actually, and the computing mode of the hybrid neural network computing unit. The routing table comprises input types and address information of all the computing units and routing connection relations among all the computing units. For example, the input type of the artificial neural network computing unit 11 is numerical quantity information, and the output of the artificial neural network computing unit 11 is the input of the impulse neural network computing unit 15.
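For illustration only, the following minimal Python sketch shows one possible shape of such a routing table and of the pre-stored pulse state information; the field names, unit identifiers and values are assumptions and do not correspond to any concrete output of the mapping tool of this embodiment.

```python
# Hypothetical routing-table entries: input type, address information and routing
# connection relation of each computing unit (all names and values are assumed).
routing_table = {
    # source unit : target unit, target input type, axon storage address in the target
    "ann_unit_11": {"target": "snn_unit_15", "input_type": "pulse",   "axon_addr": 0x03},
    "snn_unit_15": {"target": "hyb_unit_18", "input_type": "numeric", "axon_addr": 0x07},
}

# Pulse state information pre-stored for axon storage addresses that expect pulses
# (strong suppression flag IE, forced release flag FF, pulse delay); values assumed.
pulse_state_table = {
    ("snn_unit_15", 0x03): {"IE": 0, "FF": 0, "delay": 2},
}
```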
In this embodiment, each computing unit in the heterogeneous cooperative system 1 communicates by means of a routing packet, where the routing packet includes address information and a data load, and the data load includes impulse information for impulse neural network computation or numerical quantity information for artificial neural network computation. The pulse information used for the calculation of the pulse neural network or the numerical quantity information used for the calculation of the artificial neural network are used for representing the output information of the neural network calculation unit. For example, in a routing table corresponding to a given task to be processed, the output of the artificial neural network computing unit 11 is the input of the spiking neural network computing unit 15. Assume that the transmission path of the routing packet output by the artificial neural network computing unit 11 is: the artificial neural network computing unit 11, the artificial neural network computing unit 12 and the impulse neural network computing unit 15. First, the artificial neural network computing unit 11 acquires the address information and the input type (i.e., impulse information) of the impulse neural network computing unit 15 by looking up the routing table. Then, the artificial neural network computing unit 11 sends the routing packet m containing the address information and the output information (impulse information) to the artificial neural network computing unit 12, and the artificial neural network computing unit 12 determines that the address information in the routing packet m is not matched with the address information of the local computing unit, and forwards the routing packet m to the impulse neural network computing unit 15 according to the address information. The spiking neural network computing unit 15 judges that the address information in the routing packet m matches the address information of the computing unit, acquires the corresponding spiking information from the routing packet m as input information, and processes the input information.
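As a minimal sketch of the forwarding behaviour in the example above (the packet representation and callback names are assumptions, not the embodiment's hardware interface), a unit either consumes a routing packet whose address information matches its own address or forwards it unchanged:

```python
# Assumed packet representation: a dict holding address information and a data load.
def on_routing_packet(unit_address, packet, consume, forward):
    if packet["address"] == unit_address:
        # matching unit: take the pulse or numerical data load as input information
        consume(packet["payload"])
    else:
        # non-matching unit (e.g. unit 12 in the example): relay the packet onward
        forward(packet)
```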
Fig. 3 is a schematic diagram of a routing packet according to an embodiment of the present invention. As shown in fig. 3, the data payload in the routing packet of the present embodiment is pulse information. Take the length of the routing packet as 32 bits as an example, wherein the pulse information is located at bits 0-7 and the address information is located at bits 8-31. In the present embodiment, the pulse information includes pulse state information. The pulse state information comprises a strong suppression signal flag bit IE, a forced release information flag bit FF and pulse Delay information Delay. The pulse Delay information Delay is located at bits 0-5 in the routing packet, the forced release information flag bit FF is located at bit 6, and the strong suppression signal flag bit IE is located at bit 7. The address information comprises a horizontal relative distance Dx, a vertical relative distance Dy and an axon storage address A in the corresponding neural network computing unit. The axon storage address A is located at bits 8-15 in the routing packet, the vertical relative distance Dy at bits 16-23, and the horizontal relative distance Dx at bits 24-31. Optionally, the axon storage addresses of the neurons participating in the calculation have corresponding pulse state information, which is stored in the routing table in advance. It should be understood that the length of each piece of information in the routing packet and its position in the routing packet are merely exemplary, and the present embodiment is not limited thereto.
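A minimal Python sketch of packing and unpacking the 32-bit pulse-type routing packet described above; the field widths follow the layout of Fig. 3 as read here (Delay in bits 0-5, FF at bit 6, IE at bit 7, A at bits 8-15, Dy at bits 16-23, Dx at bits 24-31), and the helper names are illustrative assumptions.

```python
def pack_pulse_packet(dx, dy, axon_addr, ie, ff, delay):
    # Assumed 32-bit layout: Dx[31:24] Dy[23:16] A[15:8] IE[7] FF[6] Delay[5:0]
    assert 0 <= dx < 256 and 0 <= dy < 256 and 0 <= axon_addr < 256
    assert ie in (0, 1) and ff in (0, 1) and 0 <= delay < 64
    return (dx << 24) | (dy << 16) | (axon_addr << 8) | (ie << 7) | (ff << 6) | delay

def unpack_pulse_packet(word):
    return {
        "Dx": (word >> 24) & 0xFF,   # horizontal relative distance
        "Dy": (word >> 16) & 0xFF,   # vertical relative distance
        "A":  (word >> 8)  & 0xFF,   # axon storage address in the target unit
        "IE": (word >> 7)  & 0x1,    # strong suppression signal flag bit
        "FF": (word >> 6)  & 0x1,    # forced release information flag bit
        "Delay": word & 0x3F,        # pulse delay information
    }

# Example: a packet addressed two units right, one unit down, axon address 5, delay 2.
assert unpack_pulse_packet(pack_pulse_packet(2, 1, 5, 0, 0, 2))["A"] == 5
```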
In a neural network computing unit, each computing unit comprises a plurality of axons representing presynaptic neurons. These axons serve as the inputs: the dendrite input values corresponding to all the neurons in the computing unit are obtained from them through synaptic computation and transmitted to the neurons for computation, and the outputs of the neurons are then sent to another computing unit or to the unit's own axons. Therefore, for a computing unit, the axon is an input device; a neuron generates output information and is connected, through the routing network, to an input device in another (or the same) computing unit, and that input device represents the output axon corresponding to the neuron.
In an optional implementation manner, the pulse state information is stored in a corresponding routing table in advance, and when the neural network computing unit determines a routing packet, the pulse state information corresponding to the axon storage address in the routing packet is acquired from the routing table. The strong inhibition signal flag in the pulse state information is used to inhibit the pulse information, that is, when the strong inhibition signal flag in the pulse information received by the pulse neuron is valid, the membrane potential is made to be in a state smaller than a potential threshold, for example, the membrane potential is pulled down to a minimum value, so as to force the neuron not to generate a routing packet in the working cycle. The forced release information flag bit in the pulse state information is used for forcibly generating the second routing packet, that is, when the forced release information flag bit in the pulse information received by the pulse neuron is valid, the membrane potential is made to be in a state larger than a potential threshold value, for example, the membrane potential is pulled up to a maximum value, so as to force the neuron to generate the routing packet in the working period. The pulse delay information in the pulse state information is used to represent the processing time of the pulse information, for example, if the pulse delay information in the pulse information acquired by the spiking neural network computing unit 15 is 2, the pulse information is processed after two processing cycles.
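The following minimal sketch (threshold and bound values are assumptions) illustrates the effect of the two flag bits on the membrane potential of the receiving spiking neuron:

```python
V_MIN, V_MAX, V_THRESHOLD = -128.0, 127.0, 64.0   # assumed example values

def apply_pulse_state(membrane_potential, ie, ff):
    if ie:                 # strong suppression: force the potential below the threshold,
        return V_MIN       # so no routing packet is generated in this working cycle
    if ff:                 # forced release: force the potential above the threshold,
        return V_MAX       # so a routing packet is generated in this working cycle
    return membrane_potential

def generates_packet(membrane_potential):
    return membrane_potential >= V_THRESHOLD
```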
Fig. 4 is a schematic diagram of another routing packet according to an embodiment of the present invention. As shown in fig. 4, the data payload in the routing packet of the present embodiment is numerical quantity information. Take the length of the routing packet as 32 bits as an example, wherein the numerical quantity information V is located at bits 0-7 and the address information is located at bits 8-31. In an alternative implementation, the numerical quantity information may be derived from: (1) the output information of the corresponding output artificial neuron; (2) the logic label of the output artificial neuron in the artificial neural network computing unit; (3) the membrane potential of the output pulse neuron; (4) the difference value between the membrane potential of the output pulse neuron and the potential threshold; (5) the duration of firing of the output pulse neuron. The address information comprises a horizontal relative distance Dx, a vertical relative distance Dy and an axon storage address A in the corresponding neural network computing unit. The axon storage address A is located at bits 8-15 in the routing packet, the vertical relative distance Dy at bits 16-23, and the horizontal relative distance Dx at bits 24-31. It should be understood that the length of each piece of information in the routing packet and its position in the routing packet are merely exemplary, and the present embodiment is not limited thereto.
The following description will take the routing relationship of the artificial neural network computing unit 12, the impulse neural network computing unit 15, and the hybrid neural network computing unit 18 as an example. The calculation mode of the hybrid neural network calculation unit 18 is: the input and the output are numerical quantity information used for the calculation of the artificial neural network. In the present embodiment, each computing unit has a corresponding data processing cycle, for example, processing received data once in 1 us.
In an optional implementation manner, when a given task to be processed is mapped into the heterogeneous cooperative system, each artificial neural network computing unit is preconfigured with an upper sending threshold and a lower sending threshold. When the numerical quantity output by any artificial neuron in the artificial neural network computing unit is within the threshold interval determined by the upper and lower sending thresholds, a routing packet is determined and output. The threshold interval determined by the upper and lower sending thresholds may be: 1. from the lower sending threshold to the upper sending threshold; 2. from the minimum value to the lower sending threshold and from the upper sending threshold to the maximum value; 3. from the minimum value to the lower sending threshold; 4. from the upper sending threshold to the maximum value. Therefore, the threshold interval corresponding to each artificial neural network computing unit can be determined according to the given task to be processed.
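A minimal sketch of the four interval types listed above (the mode numbering and the concrete threshold values are assumptions):

```python
def in_sending_interval(value, lower, upper, mode):
    if mode == 1:                       # lower sending threshold to upper sending threshold
        return lower <= value <= upper
    if mode == 2:                       # minimum to lower threshold, and upper threshold to maximum
        return value <= lower or value >= upper
    if mode == 3:                       # minimum to lower sending threshold
        return value <= lower
    if mode == 4:                       # upper sending threshold to maximum
        return value >= upper
    raise ValueError("unknown interval mode")

# An artificial neural network computing unit determines and outputs a routing
# packet only when the numerical quantity output by a neuron passes this check.
assert in_sending_interval(0.5, lower=0.2, upper=0.8, mode=1)
```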
When the numerical quantity output by any artificial neuron in the artificial neural network computing unit 12 is within its corresponding threshold interval (e.g., between the lower sending threshold and the upper sending threshold), a routing packet is determined and output according to the output information and the corresponding routing table. Specifically, in response to the numerical quantity output by the artificial neuron being within the corresponding threshold interval, the address information and the input type of the impulse neural network computing unit 15 are acquired from the routing table. Optionally, the address information includes the relative distances between the artificial neural network computing unit 12 and the impulse neural network computing unit 15 in the horizontal and vertical directions, and the axon storage address in the impulse neural network computing unit 15. In response to the input type of the impulse neural network computing unit 15 being impulse information, the impulse state information corresponding to the axon storage address in the impulse neural network computing unit 15 is acquired from the routing table. The output routing packet includes the address information of the impulse neural network computing unit 15 and a first data payload. The first data payload includes the pulse state information corresponding to the axon storage address in the impulse neural network computing unit 15. As shown in fig. 1, the routing packet may be directly transmitted to the target neural network computing unit (that is, the impulse neural network computing unit 15). After the impulse neural network computing unit 15 receives the routing packet and determines that its own address information matches the address information in the routing packet, it acquires the impulse information from the routing packet and stores it in the corresponding axon storage address, so as to process the impulse information after a subsequent processing cycle begins.
In this embodiment, the receiving of the pulse information uses an Address Event Representation (AER), that is, whether the receiving end has a pulse input is represented by whether the receiving end has received the routing packet. In the present embodiment, when the impulse neural network computing unit 15 receives the routing packet from the artificial neural network computing unit 12, this indicates that one pulse is input into the corresponding axon storage address in the impulse neural network computing unit 15. Specifically, each time the impulse neural network computing unit 15 receives a routing packet sent to itself, 1 is stored in the corresponding axon storage address to indicate that there is a pulse input, and the corresponding pulse state information is stored so that impulse neural network computation can be performed in the corresponding processing cycle; the data in axon storage addresses where no 1 has been stored is 0.
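A minimal sketch of this AER-style receiving behaviour (the storage layout and sizes are assumptions):

```python
axon_input = [0] * 256        # one slot per axon storage address; 0 means no pulse received
axon_pulse_state = {}         # pulse state information kept for the corresponding cycle

def receive_pulse_packet(axon_addr, pulse_state):
    # Receiving a routing packet addressed to this axon is itself the pulse input.
    axon_input[axon_addr] = 1
    axon_pulse_state[axon_addr] = pulse_state
```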
When the spiking neural network computing unit 15 performs data processing, a routing packet is output according to the output information and the corresponding routing table in response to the membrane potential of a spiking neuron in the spiking neural network computing unit 15 being greater than the potential threshold, or in response to the forced release information flag bit FF in the pulse state information corresponding to the spiking neuron being valid. In an alternative implementation, the address information and input type of the hybrid neural network computing unit 18 are retrieved from the routing table in response to the membrane potential of the impulse neuron reaching the potential threshold. In response to the input type of the hybrid neural network computing unit 18 being numerical quantity information, a routing packet containing the numerical quantity information is determined based on the membrane potential of the impulse neuron. That is, the output routing packet includes the address information of the hybrid neural network computing unit 18 and a second data load, and the second data load includes numerical quantity information corresponding to the output information of the impulse neural network computing unit 15.
In an alternative implementation, the numerical quantity information is internal state information of the corresponding output impulse neuron, for example, the membrane potential of the output impulse neuron, the difference value between the membrane potential of the impulse neuron and the potential threshold, or the duration of firing of the impulse neuron. When the membrane potential of the output neuron is selected as the numerical quantity information, 8-bit data obtained by saturation truncation of the membrane potential of the output neuron at the current moment is used as the numerical quantity information. Optionally, the address information includes the relative distances between the impulse neural network computing unit 15 and the hybrid neural network computing unit 18 in the horizontal and vertical directions, and the axon storage address in the hybrid neural network computing unit 18. As shown in fig. 1, the routing packet may be directly transmitted to the target neural network computing unit (that is, the hybrid neural network computing unit 18). After the hybrid neural network computing unit 18 receives the routing packet and determines that its own address information matches the address information in the routing packet, it obtains the numerical quantity information from the routing packet and stores it in the corresponding axon storage address. When an axon storage address has no corresponding routing packet, the data in that axon storage address is kept at a preset default value. In a processing cycle, the hybrid neural network computing unit 18 performs artificial neural network computation on the data in the axon storage addresses corresponding to each neuron participating in the processing, updates the state of each neuron, and outputs a processing result.
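A minimal sketch of the saturation truncation mentioned above, assuming a signed 8-bit range:

```python
def saturate_to_8bit(membrane_potential):
    # Clip the membrane potential at the current moment to the signed 8-bit range
    # [-128, 127] before using it as numerical quantity information (range assumed).
    return max(-128, min(127, int(round(membrane_potential))))
```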
In this embodiment, a routing packet including numerical quantity information for artificial neural network calculation or pulse information for impulse neural network calculation is determined according to the output information of a calculation unit and the routing table, and is transmitted to the next calculation unit. Different data types can therefore be transmitted by multiplexing routing packets in a uniform format, which reduces the cost required for data transmission between calculation units of different types, reduces delay time, improves data processing efficiency, and improves the universality and flexibility of the heterogeneous cooperative system.
Fig. 5 is a flowchart of a communication method of a heterogeneous cooperative system according to an embodiment of the present invention. As shown in fig. 5, the communication method of the heterogeneous cooperative system of the present embodiment includes the following steps:
in step S100, the first computing unit periodically receives a first routing packet. The first routing packet comprises first address information and a first data load, and the first data load comprises numerical quantity information used for artificial neural network calculation or pulse information used for pulse neural network calculation.
Step S200, in response to the first address information matching the address information of the first computing unit, the first computing unit determines the input information according to the first data payload. The first address information comprises the relative position information between the output computing unit and the target computing unit of the first routing packet and the corresponding axon storage address in the target computing unit. That is, when the first address information matches the address information of the first calculation unit, the first calculation unit is determined to be the target calculation unit. The pulse information comprises pulse state information, which comprises a strong suppression signal flag bit, a forced release information flag bit and pulse delay information.
If the type of the first computing unit is a spiking neural network computing unit, or a hybrid neural network computing unit configured as a spiking neural network computing mode, or a hybrid neural network computing unit configured as a first hybrid neural network computing mode, the first data payload is spiking information. Wherein pulse information is obtained from the first data payload, a 1 is stored in the corresponding axon storage address, and corresponding pulse state information is obtained and stored from the first data payload.
If the type of the first computing unit is an artificial neural network computing unit, or a hybrid neural network computing unit configured in an artificial neural network computing mode, or a hybrid neural network computing unit configured in a second hybrid neural network computing mode, the first data payload is numerical quantity information. And acquiring numerical quantity information from the first data load and storing the numerical quantity information into a corresponding axon storage address.
Step S300, processing the input information and determining the output information. In the processing cycle, the first calculation unit processes input information stored in the axon storage address to update the state of each neuron and determines output information. And if the type of the first computing unit is the pulse neural network computing unit, processing the input information which can be processed in the current processing period according to the pulse delay information to determine the output information.
Step S400, determining a second routing packet according to the output information and the routing table. The second routing packet comprises second address information and a second data load, and the type of the second data load is matched with the input type of a second computing unit corresponding to the second address information. For example, if the input type of the second calculation unit corresponding to the second address information is pulse information, the second data load is pulse information. The routing table includes input types, address information and routing connection relations of the computing units participating in data processing.
In an optional implementation manner, when a given task to be processed needs to be executed on the heterogeneous collaborative system, information such as a parameter of the given task to be processed is mapped onto the configuration of each computing unit of the heterogeneous collaborative system through a predetermined mapping tool, and a corresponding routing table is generated. The routing table comprises address information of each computing unit participating in data processing, input types, routing connection relations among the computing units and pulse state information corresponding to axon storage addresses in the computing units. Therefore, when the neural network computing unit determines the routing packet, the position information of the computing unit and the target computing unit (namely, the first computing unit and the second computing unit), the corresponding axon storage address and the input type of the target computing unit can be obtained from the routing table.
If the calculation unit is a spiking neural network calculation unit, or a hybrid neural network calculation unit configured as a spiking neural network calculation mode, or a hybrid neural network calculation unit configured as a second hybrid neural network calculation mode, in an optional implementation manner, when the membrane potential of the output spiking neuron reaches a potential threshold, the second routing packet is determined according to the output information and the routing table. Specifically, first, the input type of the corresponding target computing unit is obtained from the routing table, and when the input type of the target computing unit is numerical quantity information, the determined second routing packet includes address information of the target computing unit and numerical quantity information corresponding to the output information, where the numerical quantity information may be a membrane potential of the output pulse neuron, a difference between the membrane potential of the output pulse neuron and a potential threshold, or a duration of issuance of the output pulse neuron. When the input type of the target computing unit is pulse information, the determined second routing packet comprises address information and pulse information of the target computing unit, wherein the pulse information comprises pulse state information corresponding to an axon storage address in the address information of the target computing unit acquired from the routing table.
If the calculation unit is an artificial neural network calculation unit, a hybrid neural network calculation unit configured as an artificial neural network calculation mode, or a hybrid neural network calculation unit configured as a first hybrid neural network calculation mode, in an alternative implementation, when the numerical quantity of the output artificial neuron is in a threshold interval (for example, between sending the lower threshold and sending the upper threshold), the second routing packet is determined according to the output information and the routing table. Specifically, the input type of the corresponding target computing unit is first obtained from the routing table, and when the input type of the target computing unit is numerical quantity information, the determined second routing packet includes address information of the target computing unit and numerical quantity information corresponding to output information, where the numerical quantity information may be output information of an output artificial neuron or a logic label of the output artificial neuron in the computing unit. When the input type of the target computing unit is pulse information, the determined second routing packet comprises address information and pulse information of the target computing unit, wherein the pulse information comprises pulse state information corresponding to an axon storage address in the address information of the target computing unit acquired from the routing table.
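A minimal sketch of step S400 under the assumptions above (the names, the routing-table fields and the threshold value are illustrative only): the second data load is chosen to match the input type of the target computing unit found in the routing table.

```python
def build_second_packet(output_kind, output_value, route_entry, potential_threshold=64.0):
    # route_entry is assumed to hold the target's address information, input type
    # and, for pulse targets, the pre-stored pulse state information.
    if route_entry["input_type"] == "pulse":
        payload = route_entry["pulse_state"]            # second data load is pulse information
    elif output_kind == "spike":
        # spiking output toward a numeric target: e.g. the membrane potential, or its
        # difference from the potential threshold, as numerical quantity information
        payload = output_value - potential_threshold
    else:
        payload = output_value                          # numeric output passed through
    return {"address": route_entry["address"], "payload": payload}
```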
And step S500, sending the second routing packet.
In this embodiment, the input type of the target computing unit is obtained according to the routing table, and the routing packet including the data type matched with the input type of the target computing unit is determined according to the output information of the computing unit, so as to be transmitted to the target computing unit.
In an alternative implementation, when the output of one computing unit is the input of multiple computing units, the communication may be performed in a multicast replication mode. The communication method of the present embodiment further includes:
and updating the address information in the first routing packet into the address information of the second target computing unit according to the routing table, and sending the updated first routing packet. For example, the present calculation unit is an impulse neural network calculation unit 14, and the target calculation units are an artificial neural network calculation unit 12 and an impulse neural network calculation unit 15. The spiking neural network computing unit 14 outputs a first routing packet, where address information in the first routing packet is the spiking neural network computing unit 15. The spiking neural network computing unit 15 receives the first routing packet, acquires the pulse information from the first routing packet, updates the address information in the first routing packet to the address information of the artificial neural network computing unit 12, updates the data load in the first routing packet to numerical quantity information corresponding to the pulse information, and sends the updated first routing packet to the artificial neural network computing unit 12. Therefore, for the same output, the pulse neural network computing unit 14 only needs to generate one routing packet, so that the delay time is reduced, and the data processing efficiency is improved. Meanwhile, the complexity of an algorithm supported by the heterogeneous cooperative system can be increased, and the practicability of the heterogeneous cooperative system is improved.
In an optional implementation manner, the communication method of this embodiment further includes: in response to the first address information not matching the address information of the present computing unit, sending the first routing packet according to the routing table. For example, as shown in fig. 1, if the present computing unit is the spiking neural network computing unit 14 and the target computing unit is the hybrid neural network computing unit 18, the spiking neural network computing unit 14 first sends the routing packet to the spiking neural network computing unit 15 (or the hybrid neural network computing unit 17); the spiking neural network computing unit 15 (or the hybrid neural network computing unit 17) determines that the address information in the first routing packet does not match its own address information, and forwards the first routing packet to the hybrid neural network computing unit 18 according to the address information.
Fig. 6 is a process diagram of a communication method of a heterogeneous cooperative system according to an embodiment of the present invention. As shown in fig. 6, the input and output of the computing unit include the following ways:
(1) the output neuron is an artificial neuron, and the target neuron is also an artificial neuron (namely, the output of the calculation unit and the input of the target calculation unit are both numerical quantity information).
The present computing unit obtains the address information and input type (i.e., numerical value information) of the target computing unit from the routing table. The artificial neuron 61 of the calculation unit outputs numerical quantity information and sends the numerical quantity information to a sending module of the calculation unit, a routing packet a is determined when a preset condition is met, the routing packet a is sent to a receiving module of the target calculation unit, and the receiving module acquires the numerical quantity information from the routing packet a and stores the numerical quantity information into an axon storage address corresponding to the target neuron (namely, the artificial neuron 62). The routing packet a comprises address information of a target computing unit and a data load containing numerical quantity information.
(2) The output neuron is an artificial neuron, and the target neuron is a pulse neuron (namely, the output of the calculation unit is numerical value information, and the input of the target calculation unit is pulse information).
The present computation unit obtains the address information and input type (i.e., pulse information) of the target computation unit from the routing table. The artificial neuron 63 of the present calculation unit outputs numerical quantity information and sends it to the sending module of the present calculation unit; when the condition is met, the pulse state information corresponding to the target calculation unit is obtained from the routing table to determine a routing packet b, and the routing packet b is sent to the receiving module of the target calculation unit. The receiving module obtains the pulse information from the routing packet b and stores it, that is, it stores 1 in the axon storage address corresponding to the target neuron (pulse neuron 64) and stores the corresponding pulse state information.
(3) The output neuron is a pulse neuron, and the target neuron is also a pulse neuron (namely, the output of the calculation unit and the input of the target calculation unit are pulse information).
The present computation unit obtains the address information and input type (i.e., pulse information) of the target computation unit from the routing table. The pulse neuron 65 of the calculation unit outputs pulse information and transmits the pulse information to a transmission module of the calculation unit, when the condition is met, pulse state information corresponding to the target calculation unit is acquired from a routing table to determine a routing packet c, the routing packet c is transmitted to a receiving module of the target calculation unit, the receiving module acquires the pulse information from the routing packet c and stores the pulse information, namely stores 1 in an axon storage address corresponding to the target neuron (pulse neuron 66), and stores the corresponding pulse state information.
(4) The output neuron is a pulse neuron, and the target neuron is an artificial neuron (namely, the output of the calculation unit is pulse information, and the input of the target calculation unit is numerical quantity information).
The present computing unit obtains the address information and input type (i.e., numerical quantity information) of the target computing unit from the routing table. The pulse neuron 67 of the present calculation unit outputs pulse information and sends it to the sending module of the present calculation unit; when the condition is met, the membrane potential of the pulse neuron 67, or the difference between that membrane potential and the potential threshold, is obtained to determine a routing packet d, and the routing packet d is sent to the receiving module of the target calculation unit. The receiving module obtains the numerical quantity information from the routing packet d and stores it in the axon storage address corresponding to the target neuron (i.e., the artificial neuron 68). The routing packet d comprises the address information of the target computing unit and a data load containing the numerical quantity information.
In this embodiment, the input type of the target computing unit is obtained according to the routing table, and the routing packet including the data type matched with the input type of the target computing unit is determined according to the output information of the computing unit, so as to be transmitted to the target computing unit.
Fig. 7 is another process diagram of a communication method of a heterogeneous cooperative system according to an embodiment of the present invention. As shown in fig. 7, the heterogeneous collaborative system includes a calculation unit 71 and a calculation unit 72. The calculation unit 71 comprises a receiving module 711, a sending module 712 and an artificial neuron 713. The calculation unit 72 comprises a receiving module 721, a sending module 722 and a pulse neuron 723. The calculation unit 71 and the calculation unit 72 are both target calculation units of the data in the routing packet X. As shown in fig. 7, the receiving module 711 in the calculation unit 71 receives the routing packet X, obtains the numerical quantity information therein and sends it to the artificial neuron 713, and the receiving module 711 also sends the routing packet X to the sending module 712. The sending module 712 obtains the address information of the calculation unit 72 and the input type (i.e., pulse information) of the calculation unit 72 from the corresponding routing table, updates the address information in the routing packet X to the address information of the calculation unit 72, updates the numerical quantity information in the data load to the corresponding pulse information, and generates and sends the routing packet X'. The pulse information includes the pulse state information corresponding to the calculation unit 72 obtained from the routing table. It is easily understood that the routing packet X and the routing packet X' carry the same underlying data information. The receiving module 721 in the calculation unit 72 receives the routing packet X', acquires the pulse information in the routing packet X', and sends the pulse information to the pulse neuron 723. If there are other target computing units for the routing packet X, the sending module 722 in the computing unit 72 may update the address information in the routing packet X' to the address information of those other target computing units, update the information in the data payload to input information corresponding to the input type of those other target computing units, and send the updated routing packet X'.
In this embodiment, when the output computing unit corresponds to multiple target computing units, only one routing packet needs to be generated for the same output of the output computing unit, which reduces the delay time and improves the data processing efficiency. Meanwhile, the complexity of an algorithm supported by the heterogeneous cooperative system can be increased, and the practicability of the heterogeneous cooperative system is improved.
Fig. 8 is a schematic diagram of a computing unit according to an embodiment of the invention. As shown in fig. 8, the computing unit 8 of the present embodiment includes a receiving module 81, an obtaining module 82, a processing module 83, a determining module 84, and a sending module 85.
The receiving module 81 is configured to receive a first routing packet, the first routing packet comprising first address information and a first data load, the first data load comprising numerical quantity information for artificial neural network calculation or pulse information for pulse neural network calculation. The pulse information comprises pulse state information, and the pulse state information comprises a strong suppression signal flag bit, a forced distribution information flag bit and pulse delay information. The strong suppression signal flag bit is used for suppressing the pulse information, and the forced distribution information flag bit is used for forcibly generating a routing packet. The pulse state information is pre-stored in a routing table, and the routing table is obtained according to the task to be processed.
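One possible way to model the pulse state information and the routing packet is sketched below. The field names, types and layout are assumptions introduced for illustration only and do not reflect the actual bit-level packet format of the system.

from dataclasses import dataclass
from typing import Union

@dataclass
class PulseStateInfo:
    strong_suppression_flag: bool   # when set, the pulse information is suppressed
    forced_distribution_flag: bool  # when set, a routing packet is generated unconditionally
    pulse_delay: int                # pulse delay information, e.g. in time steps

@dataclass
class RoutingPacket:
    address_info: int                        # address of the target computing unit
    data_load: Union[float, PulseStateInfo]  # numerical quantity info or pulse info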
The obtaining module 82 is configured to determine input information from the first data load in response to the first address information matching the address information of the computing unit 8.
The processing module 83 is configured to process the input information and determine output information. When the computing unit 8 is an artificial neural network computing unit, the processing module 83 includes a plurality of artificial neurons. When the computing unit 8 is a pulse neural network computing unit, the processing module 83 includes a plurality of pulse neurons. When the computing unit 8 is a hybrid neural network computing unit, the processing module 83 includes a plurality of artificial neurons and a plurality of pulse neurons.
The determining module 84 is configured to determine a second routing packet according to the output information and the routing table, where the second routing packet includes second address information and a second data load, and the type of the second data load matches the input type of the computing unit corresponding to the second address information. The routing table comprises the input types, address information and routing connection relations of the computing units.
The sending module 85 is configured to send the second routing packet.
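Taken together, modules 81 to 85 form a receive-process-route pipeline, which the following sketch approximates. The class, method and callback names are hypothetical, and the neuron model and routing-table lookup are passed in as plain callables rather than reflecting any concrete hardware interface.

class ComputingUnit:
    def __init__(self, address, routing_table, process_neurons, lookup_targets):
        self.address = address                  # own address information
        self.routing_table = routing_table      # input types, addresses, connections
        self.process_neurons = process_neurons  # processing module 83 (ANN, SNN or hybrid)
        self.lookup_targets = lookup_targets    # routing-table lookup used by module 84

    def on_packet(self, packet, send):
        # Receiving module 81 and obtaining module 82: accept matching addresses only.
        if packet["address"] != self.address:
            return
        input_info = packet["data_load"]
        # Processing module 83: compute the output information.
        output_info = self.process_neurons(input_info)
        # Determining module 84: one second routing packet per target, typed per target.
        for target in self.lookup_targets(self.routing_table, output_info):
            send({"address": target["address"], "data_load": target["data_load"]})
        # Sending module 85 is represented here by the injected send() callback.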
In an alternative implementation, the output information of the computing unit is a numerical quantity, and the determining module 84 is further configured to perform the following (see the code sketch after the next implementation):
acquiring, in response to the numerical quantity being within a threshold interval, second address information and the input type of the corresponding computing unit from the routing table;
acquiring, in response to the input type of the corresponding computing unit being pulse information, the corresponding pulse state information from the routing table;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein the second data load in the second routing packet is pulse information, and the pulse information includes the pulse state information.
In another alternative implementation, the output information of the computing unit is a numerical quantity, and the determining module 84 is further configured to:
acquiring, in response to the numerical quantity being within a threshold interval, second address information and the input type of the corresponding computing unit from the routing table;
and determining, in response to the input type of the corresponding computing unit being numerical quantity information, the second routing packet according to the second address information and the numerical quantity, wherein the second data load in the second routing packet is the numerical quantity information.
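The two alternative implementations above (numerical output delivered to a pulse-typed or a numerical-typed target) can be sketched as a single branch. The function and field names, and the dictionary-based routing-table entry, are assumptions for illustration only.

def second_packet_from_numerical(value, lower, upper, entry):
    # Proceed only when the numerical quantity lies within the threshold interval.
    if not (lower <= value <= upper):
        return None
    if entry["input_type"] == "pulse":
        # The target expects pulse information: use the pulse state information
        # pre-stored in the routing table entry.
        data_load = entry["pulse_state_info"]
    else:
        # The target expects numerical quantity information: pass the value through.
        data_load = value
    return {"address": entry["target_address"], "data_load": data_load}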
In an alternative implementation, the output information of the computing unit is a pulse, and the determining module 84 is further configured to perform the following (see the code sketch after the next implementation):
acquiring, in response to the membrane potential of the pulse neuron reaching a potential threshold, second address information and the input type of the corresponding computing unit from the routing table;
acquiring, in response to the input type of the corresponding computing unit being pulse information, the corresponding pulse state information from the routing table;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein the second data load in the second routing packet is pulse information, and the pulse information includes the pulse state information.
In another alternative implementation, the output information of the computing unit is a pulse, and the determining module 84 is further configured to:
acquiring, in response to the membrane potential of the pulse neuron reaching a potential threshold, second address information and the input type of the corresponding computing unit from the routing table;
acquiring corresponding numerical quantity information according to the membrane potential of the pulse neuron, wherein the numerical quantity information is the membrane potential of the pulse neuron or the difference between the membrane potential of the pulse neuron and the potential threshold;
and determining the second routing packet according to the second address information and the numerical quantity information, wherein the second data load in the second routing packet is the numerical quantity information.
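Similarly, the two implementations above for a pulse output can be combined into one sketch. The names are illustrative assumptions and the membrane potential is modeled as a plain float.

def second_packet_from_pulse(membrane_potential, potential_threshold, entry,
                             use_difference=False):
    # Proceed only when the membrane potential of the pulse neuron reaches the threshold.
    if membrane_potential < potential_threshold:
        return None
    if entry["input_type"] == "pulse":
        # The target expects pulse information: use the pulse state information
        # pre-stored in the routing table entry.
        data_load = entry["pulse_state_info"]
    else:
        # The target expects numerical quantity information: send the membrane
        # potential or its difference from the potential threshold.
        data_load = (membrane_potential - potential_threshold
                     if use_difference else membrane_potential)
    return {"address": entry["target_address"], "data_load": data_load}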
In an alternative implementation, the computing unit 8 further comprises an updating module 86. The updating module 86 is configured to update the first address information and the data type of the first data load, and to send the updated first routing packet.
In this embodiment, the input type of the target computing unit is obtained from the routing table, and a routing packet whose data type matches that input type is determined from the output information of the computing unit and transmitted to the target computing unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A communication method for a heterogeneous collaborative system, the system comprising at least two computing units, the method comprising:
receiving periodically, by a first computing unit, a first routing packet, wherein the first routing packet comprises first address information and a first data load, the first data load matches an input type of the first computing unit, and the first data load is numerical quantity information used for artificial neural network computing or pulse information used for pulse neural network computing;
in response to the first address information matching the address information of the first computing unit, the first computing unit determining input information from the first data payload;
processing the input information and determining output information;
determining a second routing packet according to the output information and a routing table, wherein the second routing packet comprises second address information and a second data load, and the type of the second data load matches the input type of a second computing unit corresponding to the second address information; the routing table comprises an input type, address information and a routing connection relation of a computing unit;
and sending the second routing packet.
2. The method of claim 1, wherein the at least two computing units comprise at least one artificial neural network computing unit and at least one pulse neural network computing unit; or
the at least two computing units comprise at least two hybrid neural network computing units which simultaneously support artificial neural network computing and pulse neural network computing; or
the at least two computing units comprise at least one artificial neural network computing unit, at least one pulse neural network computing unit, and at least one hybrid neural network computing unit.
3. The method of claim 1, wherein the output information of the first computing unit is a numerical quantity, and wherein determining the second routing packet based on the output information and the routing table comprises:
acquiring second address information and an input type of the second computing unit from the routing table in response to the numerical quantity being within a threshold interval;
acquiring corresponding pulse state information from the routing table in response to the input type of the second computing unit being pulse information;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein a second data load in the second routing packet is pulse information; wherein the pulse information includes the pulse state information.
4. The method of claim 1, wherein the output information of the first computing unit is a numerical quantity, and wherein determining the second routing packet based on the output information and the routing table further comprises:
acquiring second address information and an input type of the second computing unit from the routing table in response to the numerical quantity being within a threshold interval;
and in response to the input type of the second computing unit being numerical quantity information, determining the second routing packet according to the second address information and the numerical quantity, wherein a second data load in the second routing packet is the numerical quantity information.
5. The method of claim 1, wherein the output information of the first computing unit is a pulse, and wherein determining the second routing packet based on the output information and the routing table comprises:
acquiring second address information and an input type of the second computing unit from the routing table in response to the membrane potential of the pulse neuron reaching a potential threshold;
acquiring corresponding pulse state information from the routing table in response to the input type of the second computing unit being pulse information;
determining the second routing packet according to the second address information and the corresponding pulse state information, wherein a second data load in the second routing packet is pulse information; wherein the pulse information includes pulse state information.
6. The method of claim 1, wherein the output information of the first computing unit is a pulse, and wherein determining the second routing packet based on the output information and the routing table further comprises:
acquiring second address information and an input type of the second computing unit from the routing table in response to the membrane potential of the pulse neuron reaching a potential threshold;
acquiring, in response to the input type of the second computing unit being numerical quantity information, corresponding numerical quantity information according to the membrane potential of the pulse neuron, wherein the numerical quantity information is the membrane potential of the pulse neuron or the difference between the membrane potential of the pulse neuron and the potential threshold;
and determining the second routing packet according to the second address information and the numerical quantity information, wherein a second data load in the second routing packet is the numerical quantity information.
7. The method according to claim 3 or 5, wherein the pulse information includes a strong suppression signal flag bit, a forced distribution information flag bit, and pulse delay information;
the strong suppression signal flag bit is used for suppressing the pulse information, and the forced distribution information flag bit is used for forcibly generating the second routing packet.
8. A heterogeneous collaborative system, wherein the system comprises at least two computing units;
the calculation unit includes:
a receiving module configured to periodically receive a first routing packet, wherein the first routing packet includes first address information and a first data load, and the first data load is numerical quantity information for artificial neural network calculation or pulse information for pulse neural network calculation;
an obtaining module configured to determine input information from the first data load in response to the first address information matching address information of the computing unit;
a processing module configured to process the input information and determine output information;
a determining module configured to determine a second routing packet according to the output information and a routing table, where the second routing packet includes second address information and a second data load, and the type of the second data load matches the input type of a computing unit corresponding to the second address information; the routing table comprises an input type, address information and a routing connection relation of a computing unit;
a sending module configured to send the second routing packet.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method of any of claims 1-7.
CN201910488020.0A 2019-06-05 2019-06-05 Heterogeneous cooperative system and communication method thereof Active CN110188872B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910488020.0A CN110188872B (en) 2019-06-05 2019-06-05 Heterogeneous cooperative system and communication method thereof
PCT/CN2020/090562 WO2020244370A1 (en) 2019-06-05 2020-05-15 Heterogeneous cooperative system and communication method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910488020.0A CN110188872B (en) 2019-06-05 2019-06-05 Heterogeneous cooperative system and communication method thereof

Publications (2)

Publication Number Publication Date
CN110188872A CN110188872A (en) 2019-08-30
CN110188872B true CN110188872B (en) 2021-04-13

Family

ID=67720596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910488020.0A Active CN110188872B (en) 2019-06-05 2019-06-05 Heterogeneous cooperative system and communication method thereof

Country Status (2)

Country Link
CN (1) CN110188872B (en)
WO (1) WO2020244370A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188872B (en) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN112650705A (en) * 2020-12-31 2021-04-13 清华大学 Routing control method and artificial intelligence processor
CN113723594B (en) * 2021-08-31 2023-12-05 绍兴市北大信息技术科创中心 Pulse neural network target identification method
CN114998996B (en) * 2022-06-14 2024-04-05 中国电信股份有限公司 Signal processing method, device and equipment with motion attribute information and storage
CN116015951B (en) * 2022-12-31 2023-08-29 北京天融信网络安全技术有限公司 Time object matching method and device, electronic equipment and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056212B (en) * 2016-05-25 2018-11-23 清华大学 A kind of artificial neural networks core
US11501131B2 (en) * 2016-09-09 2022-11-15 SK Hynix Inc. Neural network hardware accelerator architectures and operating method thereof
CN110188872B (en) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095961A (en) * 2015-07-16 2015-11-25 清华大学 Mixing system with artificial neural network and impulsive neural network
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN105095965A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid communication method of artificial neural network and impulsive neural network
CN109491956A (en) * 2018-11-09 2019-03-19 北京灵汐科技有限公司 A kind of isomery cooperated computing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rivu Midya et al., "Artificial Neural Network (ANN) to Spiking Neural Network (SNN) Converters Based on Diffusive Memristors", Adv. Electron. Mater., 2019-03-29, full text *

Also Published As

Publication number Publication date
CN110188872A (en) 2019-08-30
WO2020244370A1 (en) 2020-12-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant