WO2022078334A1 - Processing method, medium, and device for processing signals using a neuron model and network - Google Patents


Info

Publication number
WO2022078334A1
WO2022078334A1 (PCT/CN2021/123314)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
neuron model
processing
signal
network unit
Prior art date
Application number
PCT/CN2021/123314
Other languages
English (en)
French (fr)
Inventor
赵蓉
杨哲宇
施路平
王韬毅
何伟
祝夭龙
Original Assignee
北京灵汐科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京灵汐科技有限公司
Publication of WO2022078334A1 publication Critical patent/WO2022078334A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061: Physical realisation using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G06N3/04: Architecture, e.g. interconnection topology

Definitions

  • Embodiments of the present disclosure relate to artificial intelligence technologies, and in particular, to a method for processing signals using a neuron model, a method for processing signals using a neural network, a computer-readable storage medium, and an electronic device.
  • ANN: Artificial Neural Network
  • SNN: Spiking Neural Network
  • a certain type of neural network unit can only process signals of a certain data format.
  • a complex processing function such as retinal imaging
  • Various neural network units provided in the prior art cannot cope with the above-mentioned complex data processing requirements.
  • Embodiments of the present disclosure provide a signal processing method using a neuron model, a signal processing method using a neural network, a computer-readable storage medium, and an electronic device, so as to simulate complex neuron functions and realize the mixed processing of signals of various data formats.
  • In a first aspect, a method for processing signals using a neuron model is provided. The neuron model includes at least two independent neural network units, each of which is used to process signals of a different data format. The processing method includes: receiving an input signal; determining, according to the input signal, a neural network unit capable of processing the input signal; and processing the input signal using the neural network unit capable of processing the input signal.
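The three steps above can be sketched as a minimal Python illustration. This is not the patent's implementation; the class name `NeuronModel` and the `(format, payload)` layout of the input signal are assumptions made for the example.

```python
class NeuronModel:
    """A neuron model holding several independent neural network units,
    each handling exactly one data format (illustrative sketch)."""

    def __init__(self, units):
        # units: mapping from data-format name to a processing callable
        self.units = units

    def process(self, input_signal):
        # Receive the input signal (here, a (format, payload) pair)
        fmt, payload = input_signal
        # Determine the unit capable of processing this data format
        unit = self.units[fmt]
        # Process the input with the selected (gated) unit
        return unit(payload)


model = NeuronModel({
    "analog": lambda x: x * 0.5,                  # stand-in for an analog-signal unit
    "spike": lambda xs: [int(v > 0) for v in xs], # stand-in for a pulse-signal unit
})
```

For example, `model.process(("analog", 4.0))` is handled by the analog unit, while `model.process(("spike", [1, -1, 2]))` is dispatched to the pulse unit.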
  • In a second aspect, a signal processing method using a neural network is provided, wherein the neural network includes at least one simulation layer, the simulation layer includes a data format conversion device and at least one neuron model, and the neuron model includes at least two independent neural network units, each of which is used to process signals of a different data format.
  • The processing method using the neural network includes: using the processing method provided by the first aspect of the present disclosure to process, with each neuron model, the corresponding input signal, wherein initial information is used to generate the input signal matching the coding mode corresponding to each neuron model.
  • When the encoding mode of the initial information does not match the encoding mode corresponding to the neuron model that processes it, a data format conversion device is used to convert the initial information into an input signal matching the encoding mode corresponding to that neuron model.
  • When the encoding mode of the initial information matches the encoding mode corresponding to the neuron model that processes it, the initial information is used directly as the input signal of that neuron model.
  • In a third aspect, a signal processing apparatus for processing signals using a neuron model is provided, including:
  • a neuron model storage module, which stores a neuron model including at least two independent neural network units, each of which is used to process signals of a different data format;
  • an input signal receiving module, which is used for receiving the input signal;
  • a target determination module, which is configured to determine, according to the input signal, a neural network unit capable of processing the input signal; and
  • a gating device, which is connected to each of the neural network units and is used to gate a connection between the input signal receiving module and the neural network unit, determined by the target determination module, that is capable of processing the input signal, so that the input signal is processed by that neural network unit.
  • In a fourth aspect, a computer-readable storage medium is provided, on which a neuron model and an executable program are stored. The neuron model includes at least two independent neural network units, each used to process signals of a different data format; when the executable program is called, the processing method described in the first aspect of the present disclosure can be implemented.
  • In a fifth aspect, a computer-readable storage medium is provided, on which a neural network and an executable program are stored. The neural network includes at least one simulation layer, the simulation layer includes a data format conversion device and at least one neuron model, and the neuron model includes at least two independent neural network units, each used to process signals of a different data format; when the executable program is called, the processing method provided by the second aspect of the present disclosure can be performed.
  • In a sixth aspect, an electronic device is provided, comprising:
  • one or more processors;
  • a memory; and
  • one or more I/O interfaces, connected between the processor and the memory, configured to realize the information interaction between the processor and the memory.
  • In a seventh aspect, an electronic device is provided, comprising:
  • one or more processors;
  • a memory; and
  • one or more I/O interfaces, connected between the processor and the memory, configured to realize the information interaction between the processor and the memory.
  • In the technical solutions of the embodiments of the present disclosure, the neuron model includes at least two independent neural network units, each used to process signals of a different data format, and among the neural network units of the neuron model, only one neural network unit is gated at a time.
  • In this way, a large-scale neuron topology satisfying the required neuron processing function can be constructed through gating control, which overcomes the defect that the prior art cannot simulate neurons' hybrid processing of heterogeneous information.
  • A new form of heterogeneous-fusion hybrid neural network operation is thus proposed, and the structure of the existing neuron model is optimized, meeting the growing demand for constructing personalized and convenient neuron topologies.
  • FIG. 1 is a flowchart of a processing method provided by Embodiment 1 of the present disclosure;
  • FIG. 2 is a schematic diagram of a neuron model in Embodiment 1 of the present disclosure
  • FIG. 3 is a flowchart of an embodiment of step S120;
  • FIG. 4 is a schematic block diagram of a processing device in Embodiment 2 of the present disclosure.
  • FIG. 5 is a schematic block diagram of another processing device in Embodiment 2 of the present disclosure.
  • FIG. 6 is a schematic block diagram of still another processing device in Embodiment 2 of the present disclosure.
  • FIG. 7 is a schematic diagram of a simulation layer in Embodiment 3 of the present disclosure.
  • FIG. 9 is a schematic diagram of the topology structure of the simulation layer involved in Embodiment 3 of the present disclosure.
  • FIG. 10 is a schematic diagram of another simulation layer in Embodiment 3 of the present disclosure.
  • FIG. 11 is a schematic diagram of another simulation layer in Embodiment 3 of the present disclosure.
  • FIG. 12 is a schematic diagram of a retinal neuron topology in Embodiment 3 of the present disclosure.
  • FIG. 13 is a schematic diagram of a specific retinal neuron topology in Embodiment 3 of the present disclosure.
  • FIG. 1 shows a method for processing signals using a neuron model according to Embodiment 1 of the present disclosure.
  • As shown in FIG. 2, the neuron model includes at least two independent neural network units (denoted neural network unit 1, neural network unit 2, ..., neural network unit n in FIG. 2). Each neural network unit is used to process signals of a different data format.
  • the processing method includes:
  • step S110: receiving an input signal;
  • step S120: determining, according to the input signal, a neural network unit capable of processing the input signal;
  • step S130: processing the input signal with the neural network unit capable of processing the input signal.
  • In this way, one of the neural network units of the neuron model is gated.
  • Here, the neural network unit refers specifically to a minimum computing unit used to simulate the computing function of neurons; it may be implemented in software and/or hardware. After a specific neural network unit is constructed, it can process signals of one specific data format.
  • the data format may include: an analog signal, a pulse signal, a digital level signal, and the like.
  • different neural network units are used to process signals of different data formats.
  • signals of different data formats correspond to different calculation precisions or signal encoding methods.
  • For example, if neural network unit A is used to process analog signals, it can process and transmit signals with relatively high precision; but because unit A cannot encode the analog signals, the forms of computing task it can handle are relatively limited, and so are its application scenarios. If neural network unit B is used to process pulse signals, it is suited to rich encoding forms, such as time-domain coding or spatiotemporal coding; the application scenarios of unit B are therefore comparatively wide, but its signal precision is lower.
  • In the prior art, when a specific type of neural network is constructed, it is generally obtained by combining neural network units that all process signals of one data format. Therefore, such a neural network can only process signals of that specific data format.
  • the inventor creatively proposes a new structure of a neuron model.
  • The neuron model includes multiple independent neural network units, which are respectively used to process signals of different data formats. In actual use, according to the data format of the signal to be processed, the neural network unit matching the input signal is gated.
  • By combining multiple such neuron models into a neuron topology, the mixed processing of signals of different data formats can be realized.
  • The neuron model may have multiple input terminals and matching output terminals, with different input/output terminal pairs corresponding to different neural network units; alternatively, the neuron model may have a single input terminal and a single output terminal, with a gate switch inside the neuron model through which the gated neural network unit is connected to the single input terminal and the single output terminal.
  • Different output terminals can also be configured according to the neural network unit gated in the neuron model or according to different signal processing requirements.
  • For example, the output of the neural network unit can be used directly as the output of the neuron model. If the neuron model gates the neural network unit that processes pulse signals, and only a single pulse signal processed by the neuron model is subsequently needed, the output of that neural network unit can directly serve as the output of the neuron model; if instead multiple pulse signals processed by the neuron model are needed, all (or several) outputs of the last hidden layer of the neural network unit can serve as the output terminals of the model. This embodiment is not limited in this respect.
  • In the technical solution of this embodiment, the neuron model includes at least two independent neural network units, each used to process input signals of a different data format, and among these neural network units only one is gated at a time.
  • In this way, a large-scale neuron topology satisfying the required neuron processing function can be constructed through gating control, which overcomes the defect that the prior art cannot simulate neurons' hybrid processing of heterogeneous information.
  • A new form of heterogeneous-fusion hybrid neural network operation is thus proposed, and the structure of the existing neuron model is optimized, meeting the growing demand for constructing personalized and convenient neuron topologies.
  • the neural network unit in the neuron model may include: a first neural network unit and a second neural network unit;
  • the first neural network unit is used to process and output the analog signal; the second neural network unit is used to process and output the pulse signal.
  • By uniformly using the neuron model provided by the embodiments of the present disclosure, a heterogeneous-fusion neural network that processes analog signals and pulse signals in a mixed manner can be constructed, so that the network has higher signal precision when processing analog signals and, when processing pulse signals, more coding modes to choose from.
  • the first neural network unit may be an artificial neural network (ANN) unit
  • the second neural network unit may be a spiking neural network (SNN) unit.
  • The ANN unit directly processes analog information and transmits activations with high precision. ANN units show great capability in scenarios that require high-precision computing, but they involve a huge number of computing operations, resulting in large power consumption and large latency.
  • The SNN unit memorizes historical temporal information through intrinsic neuron dynamics and encodes information into a sequence of digital spikes, enabling event-driven computing.
  • For SNN units, the input data usually contains temporal information, and the data stream is sparse. Therefore, SNN units have richer information encoding methods than ANN units, such as temporal coding, spatiotemporal coding, group coding, Bayesian coding, time-delay coding, and sparse coding.
  • Owing to this integrated spatiotemporal encoding, the SNN unit has great potential for tasks involving complex spatiotemporal information and multimodal information.
  • However, the discontinuity of SNN units, the complexity of spatiotemporal coding, and the uncertainty of the network structure make it difficult to describe the entire network mathematically and to construct an effective, general supervised learning algorithm. Therefore, SNN units usually require limited computation and consume little energy, but their accuracy is not high.
  • The ANN unit and the SNN unit thus have complementary advantages and suit different application scenarios. By using the neuron model provided by the embodiments of the present disclosure, application scenarios that draw on the advantages of both the ANN unit and the SNN unit can be accommodated.
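The contrast between the two unit types can be made concrete with a toy sketch (not taken from the patent): an ANN-style unit computes a high-precision weighted activation in one shot, while an SNN-style unit, modeled here as a leaky integrate-and-fire neuron, integrates a membrane potential over time steps and emits binary spikes. All parameter values are illustrative assumptions.

```python
def ann_unit(inputs, weights):
    # ANN-style computation: high-precision weighted sum through a ReLU
    s = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, s)


def snn_unit(input_train, threshold=1.0, leak=0.9):
    # SNN-style (leaky integrate-and-fire) computation: integrate inputs
    # into a membrane potential, apply leak, fire a spike on threshold
    v, out = 0.0, []
    for x in input_train:
        v = v * leak + x
        if v >= threshold:
            out.append(1)
            v = 0.0          # reset the potential after the spike
        else:
            out.append(0)
    return out
```

The ANN unit returns a real-valued activation; the SNN unit returns an event-driven spike train whose timing carries the encoded information.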
  • step S120 may include:
  • step S121: parsing the routing information included in the input signal;
  • step S122: determining, as the neural network unit capable of processing the input signal, the neural network unit whose identifier is consistent with the neural network unit identifier carried in the routing information.
  • The routing information may be generated using the neural network unit identifier of the neural network unit capable of processing the input signal.
  • For example, suppose the neural network unit identifier of the first neural network unit is 0001, and that of the second neural network unit is 0010.
  • If the first neural network unit is the one capable of processing the input signal, then when the input signal is encoded, "0001" is used to generate the routing information.
  • When the neuron model receives the input signal, it parses the routing information carried in it and obtains the neural network unit identifier "0001"; it then compares "0001" with the identifier 0001 of the first neural network unit and the identifier 0010 of the second neural network unit. Since the identifier carried in the routing information is consistent with that of the first neural network unit, the first neural network unit is determined to be the neural network unit capable of processing the input signal.
  • multiple neuron models can be used for networking to obtain a neuron network, and routing information is added to each input signal sent to the neuron network.
  • The routing information indicates the sequence of neuron models that each input signal needs to pass through, and the neural network unit (i.e., the neural network unit capable of processing the input signal) that needs to be gated in each neuron model along the way.
  • In this way, by parsing the routing information included in the input signal, the neural network unit to be gated can be determined simply and conveniently.
  • FIG. 4 shows a signal processing apparatus for processing signals using a neuron model according to Embodiment 2 of the present disclosure.
  • The signal processing apparatus includes a neuron model storage module 210, an input signal receiving module 220, a target determination module 230, and a gating device 240.
  • The neuron model storage module 210 stores the neuron model described in Embodiment 1; that is, the neuron model includes at least two independent neural network units, each of which is used to process signals of a different data format.
  • the input signal receiving module 220 is configured to perform step S110, that is, the input signal receiving module 220 is configured to receive the input signal.
  • the target determination module 230 is configured to determine a neural network unit capable of processing the input signal according to the input signal.
  • The gating device 240 is connected to each of the neural network units, and is used to gate a connection between the input signal receiving module and the neural network unit, determined by the target determination module, that is capable of processing the input signal, so that the input signal is processed by that neural network unit.
  • the processing apparatus is used to execute the processing method provided in Embodiment 1 of the present disclosure.
  • the advantages and beneficial effects of the processing method have been described in detail above, and will not be repeated here.
  • A block diagram of a processing device is shown in FIG. 5.
  • the neuron model includes two neural network units.
  • the two neural network units are referred to as a first neural network unit 211 and a second neural network unit 212, respectively.
  • The gating device 240 is connected to the first neural network unit 211 and the second neural network unit 212, respectively, so that the gating device 240 can select one of them as the neural network unit that processes the input signal.
  • the neuron model may have only a single input terminal and a single output terminal, and inside the neuron model, the single input terminal is connected to the input terminal of the gating device 240 , the gating device The output terminal of 240 is connected to the input terminal of the first neural network unit 211 and the second neural network unit 212 at the same time, and the output terminals of the first neural network unit 211 and the second neural network unit 212 are respectively connected to the single output terminal.
  • The gating device 240 may be a manual control device (for example, a DIP switch or a touch switch) or a program-controlled device (for example, a program-controlled switch or a multi-way data selector; in this embodiment, a two-to-one data selector), which is not limited in the present disclosure.
  • The channel selected by the gating device can be controlled manually or by means of a controller, so that the neuron model can gate the required neural network unit.
  • the technical solution of the embodiments of the present disclosure simplifies the external connection of the neuron model by setting a gating device inside the neuron model, and can be interconnected with other neuron models only through a single input terminal and a single output terminal. Furthermore, by simply controlling the gating devices in the interconnected neuron models, various required neuron topologies can be flexibly obtained.
  • the gating device may be a program-controlled device, and the processing device may further include: a gating control device 250, the gating control device 250 is connected to the gating device 240;
  • the gating control device 250 is configured to send a gating control instruction matching the neural network unit capable of processing the input signal to the gating device according to the neural network unit capable of processing the input signal.
  • The gating device 240 is specifically configured to gate, according to the gating control instruction, the neural network unit in the neuron model capable of processing the input signal.
  • the gating device 240 can be controlled by means of program control, so as to further improve the intelligence and versatility of the entire neuron model.
  • That is, the gating control device 250 may be used to perform gating control on the gating device 240.
  • For example, the gating control device 250 may be a control chip. It may determine the neural network unit that currently needs to be gated according to a control instruction input by the user (received over a wired and/or wireless connection), or directly determine, according to the output of the target determination module 230, the neural network unit capable of processing the input signal fed into the neuron model.
  • The gating control device 250 can then determine, according to a preset correspondence between the unit to be gated and the gating control instruction, the gating control instruction matching the neural network unit capable of processing the input signal, and send that instruction to the gating device 240.
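The controller's lookup and the two-to-one selector can be sketched as follows. The table contents, names, and select-line values are assumptions, not details given in the patent:

```python
# Assumed preset correspondence: unit to gate -> gating control instruction
GATING_TABLE = {
    "first_unit": 0,   # select-line value routing the input to unit 211
    "second_unit": 1,  # select-line value routing the input to unit 212
}


def gating_instruction(target_unit):
    # Gating control device 250: map the required unit to its instruction
    return GATING_TABLE[target_unit]


def two_to_one_selector(select, channels):
    # Gating device 240 (two-to-one data selector): forward the signal
    # to the channel chosen by the control instruction
    return channels[select]
```

For example, the instruction for "second_unit" is select-line 1, which routes the incoming signal to the second unit's channel.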
  • The processing device may further include a wireless transceiver module connected to the gating control device 250; the gating control device 250 can determine the neural network unit to be gated according to control instructions received by the wireless transceiver module, or according to the input signal fed into the neuron model.
  • The advantage of providing the gating control device 250 is that the neuron topology constructed from multiple neuron models can be dynamically adjusted through a simple program-control operation, further expanding the usage scenarios of the neuron models.
  • FIG. 6 is a schematic diagram of the processing apparatus provided in Embodiment 3 of the present disclosure. As shown in FIG. 6 , in this embodiment, the gating control device 250 is connected to the input end of the neuron model.
  • The gating control device 250 is used to parse the routing information included in the input signal; accordingly, the target determination module (not shown) determines, according to the routing information, the neural network unit to be gated. The gating control device 250 then sends a gating control instruction matching that unit to the gating device 240. The gating device 240 gates the corresponding neural network unit in the neuron model according to the instruction, and sends the input signal to the gated neural network unit (i.e., one of the first neural network unit 211 and the second neural network unit 212).
  • multiple neuron models can be used for networking to obtain a neuron network, and routing information is added to each input signal sent to the neuron network.
  • the routing information indicates the order of the neuron models that each input signal needs to pass through, and the target neural network unit that needs to be gated for each neuron model passed through.
  • the target neural network unit to be gated can be simply and conveniently determined by analyzing the routing information included in the input signal.
  • FIG. 7 is a schematic diagram of a neuron topology structure of a neural network in Embodiment 4 of the present disclosure.
  • The neural network includes at least one simulation layer, and the simulation layer includes a data format conversion device 420 and at least one neuron model.
  • The neuron model is the neuron model provided in the first to third embodiments of the present disclosure; that is, the neuron model includes at least two independent neural network units, each of which is used to process signals of a different data format.
  • this embodiment provides a method for processing a signal by using a neural network, and the method includes:
  • step S210: using the initial information to generate an input signal that matches the coding mode corresponding to each neuron model;
  • each neuron model then processes the corresponding input signal using the processing method provided in Embodiment 1 of the present disclosure.
  • In some embodiments, using the initial information to generate an input signal that matches the coding mode corresponding to each neuron model (i.e., step S210) includes:
  • step S211: when the encoding mode of the initial information does not match the encoding mode corresponding to the neuron model that processes it, using a data format conversion device to convert the initial information into an input signal matching the encoding mode corresponding to that neuron model;
  • step S212: when the encoding mode of the initial information matches the encoding mode corresponding to the neuron model that processes it, using the initial information directly as the input signal of that neuron model.
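Steps S211/S212 amount to a convert-or-pass-through decision, which can be sketched as below. The converter registry and its two toy conversions (spike accumulation, naive rate coding) are illustrative assumptions, not the patent's concrete converters:

```python
# Assumed registry of format converters: (source, target) -> conversion
CONVERTERS = {
    ("spike", "analog"): lambda spikes: float(sum(spikes)),  # toy accumulation
    ("analog", "spike"): lambda x: [1] * int(x),             # toy rate coding
}


def make_input_signal(initial_info, src_encoding, model_encoding):
    if src_encoding == model_encoding:
        # Step S212: encodings match, use the initial information directly
        return initial_info
    # Step S211: encodings differ, apply the data format conversion device
    return CONVERTERS[(src_encoding, model_encoding)](initial_info)
```

For example, a spike train destined for an analog-encoded neuron model is accumulated into an analog value, while analog information destined for an analog-encoded model passes through unchanged.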
  • the specific number of neuron models included in the simulation layer of the neural network is not particularly limited.
  • the simulation layer may include one neuron model, or may include multiple neuron models.
  • Any neuron model can process data according to a processing method provided by an embodiment of the present disclosure. Therefore, the method provided by this embodiment also has the advantage of performing mixed processing on signals in multiple data formats.
  • In other words, the neural network provided in this embodiment has a neuron topology to which the neuron models provided by the various embodiments of the present disclosure are applicable, enabling the implementation of hybrid processing.
  • A schematic diagram of the topology of a simulation layer in the neural network is shown in FIG. 9.
  • different neuron models in the simulation layer can transmit signals in pairs by means of direct connection or relay.
  • Between neuron models, a data format conversion device may be connected so as to locally form a neuron topology.
  • the specific type and source of "initial information” are not particularly limited.
  • the "initial information” can be input externally or output by other neuron models.
  • For the neuron model directly connected to the input end of the neural network, its initial information is the information input from outside the neural network.
  • the initial information carries routing information.
  • the routing information includes: a routing neuron model identifier, and an identifier of a neural network unit gated in the routing neuron model.
  • routing neuron model refers to each neuron model in the path indicated by the routing information.
  • The routing information includes a plurality of routing neuron model identifiers arranged in sequence, i.e., a routing neuron model identifier sequence. This sequence specifies which neuron models (i.e., the routing neuron models) the input signal needs to be input to in turn, and further indicates, through the neural network unit identifier, which neural network unit each routing neuron model needs to gate to process the network input signal.
  • After processing, the current signal processing position can be marked in the routing neuron model identifier sequence and output, so that the neuron model receiving the processed signal can quickly determine the next-hop neuron model and the neural network unit gated in that next-hop model.
  • Each neuron model may receive the network input signal (or the network input signal that has already been processed by other neuron models); these signals are collectively referred to as the target input signal.
  • The routing information can be parsed from the target input signal, from which the next-hop neuron model and the neural network unit gated in that model can be obtained.
  • A neuron model identifier can be set for each neuron model, so that after parsing the routing neuron model identifiers included in the routing information, each neuron model can determine whether it is the next hop.
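The hop-by-hop routing described above can be sketched as a sequence of (neuron-model id, gated-unit id) pairs plus a cursor marking the current signal processing position. The field names (`path`, `pos`) and identifiers are assumptions for illustration:

```python
def next_hop(routing):
    # Return the next-hop neuron model on the path and the unit to gate
    # there, advancing the cursor so the receiving model can repeat the
    # same lookup when it forwards the processed signal.
    path, pos = routing["path"], routing["pos"]
    if pos >= len(path):
        return None  # end of path: the signal has been fully processed
    model_id, unit_id = path[pos]
    routing["pos"] += 1
    return model_id, unit_id


# Example route: first model m1 gates unit "0001", then m2 gates "0010"
route = {"path": [("m1", "0001"), ("m2", "0010")], "pos": 0}
```

Each call to `next_hop(route)` yields the next routing neuron model and its gated unit, and returns `None` once the identifier sequence is exhausted.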
  • When the simulation layer includes multiple neuron models, different types of neural network units can be gated in different neuron models.
  • The following takes the simulation layer shown in FIG. 10, which includes the first neuron model 411 and the second neuron model 412, as an example.
  • In the first neuron model 411, neural network unit A is gated and used to process signals of the first data format; in the second neuron model 412, neural network unit B is gated and used to process signals of the second data format. In this way, when the simulation layer processes data, it combines the advantages of both neural network unit A and neural network unit B.
  • a data format conversion device 420 is introduced to realize the conversion of signals of different data formats.
  • Different types of data format conversion devices 420 can be selected according to the specific forms of the first and second data formats, for example, a pulse-signal-to-analog-signal converter or an analog-signal-to-pulse-signal converter; alternatively, different types of data format conversion units may be preconfigured in a unified data format conversion device 420, and the matching conversion unit gated according to the specific forms of the first and second data formats. This embodiment is not limited in this respect.
  • the multiple neuron models can be networked in a preset manner, with every two directly connected neuron models connected through the data format conversion device 420.
  • the initial information of the neuron model is the information output by the previous neuron model.
  • the "front” and “rear” here are the front and rear of the signal flow direction.
  • the neuron model that receives the signal first is the previous-level neuron model
  • the neuron model that receives the signal later is the latter-level neuron model.
  • the topology shown in FIG. 9 is one example of a simulation layer topology.
  • the present disclosure is not limited to this; for example, multiple neuron models can be cascaded.
  • by introducing a data format conversion device, the technical solutions of the embodiments of the present disclosure ensure that, even when two adjacent neuron models gate different types of neural network units, each unit processes signals in its adapted data format.
  • a new form of heterogeneous fusion neural network hybrid operation is proposed, which effectively realizes the hybrid processing of signals of various data formats.
  • the spiking neural network unit may be gated in the first neuron model, and the artificial neural network unit may be gated in the second neuron model.
  • step S211 is specifically executed as:
  • the pulse signals output by the first neuron model are accumulated over a set duration to obtain an analog signal, and the analog signal is used as the input signal of the next-stage neuron model.
  • the data format conversion device needs to convert the pulse signal output by the spiking neural network unit in the previous neuron model into an analog signal adapted to the artificial neural network unit in the subsequent neuron model.
  • the specific implementation is as follows: accumulating the pulse signals output by the first neuron model over a set duration to obtain the analog signal.
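A minimal sketch of this spike-to-analog accumulation, assuming each pulse contributes a fixed weight (the patent leaves the exact accumulation rule open):

```python
def spikes_to_analog(spike_times, window_start, window_length, pulse_weight=1.0):
    """Accumulate every pulse emitted during the set duration into one
    analog value by counting the pulses and scaling by a fixed weight."""
    in_window = [t for t in spike_times
                 if window_start <= t < window_start + window_length]
    return pulse_weight * len(in_window)

# Five of the six pulses fall inside the 10 ms window, so the analog level is 5.0
analog_level = spikes_to_analog([1.0, 2.5, 4.0, 7.5, 9.9, 12.0],
                                window_start=0.0, window_length=10.0)
```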
  • the input end of the data format conversion device is connected to the output end of the first neuron model, and the output end of the data format conversion device is connected to the input end of the second neuron model.
  • the input end of the data format conversion device is connected to each output end in the last hidden layer of the first neuron model, and the output end of the data format conversion device is connected to the input end of the second neuron model.
  • each output terminal (output 1 through output 5 in the example of FIG. 11) in the last hidden layer of the spiking neural network unit can be led out of the previous-stage neuron model; the data format conversion device then connects not to the single output terminal of the previous-stage neuron model but to each output terminal in the last hidden layer of the spiking neural network unit.
  • in this way, the data format conversion device can obtain multiple pulse signals at the same time, quickly accumulate the required analog signal, and provide it to the artificial neural network unit of the next-stage neuron model, further reducing the computational latency of the entire neuron topology.
  • the artificial neural network unit can be gated in the previous-stage neuron model, and the spiking neural network unit can be gated in the subsequent-stage neuron model.
  • step S211 can be specifically executed as:
  • the analog signal output by the previous-stage neuron model is sampled, and the sampled pulse signal is used as the input signal of the subsequent-stage neuron model.
  • the data format conversion device needs to convert the analog signal output by the artificial neural network unit in the previous neuron model into the pulse signal adapted to the spiking neural network unit in the subsequent neuron model, so as to further ensure correct mixed processing of signals in multiple data formats.
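One way to sketch this analog-to-spike sampling is an integrate-and-fire style threshold sampler; the fixed-threshold rule is an assumption for illustration, since the patent only requires that the sampled pulse signal feed the subsequent-stage neuron model.

```python
def analog_to_spikes(samples, threshold):
    """Sample an analog sequence into a binary pulse train: integrate the
    input, emit a pulse (1) whenever the running sum crosses the threshold,
    then subtract the threshold (integrate-and-fire style sampling)."""
    membrane, pulses = 0.0, []
    for x in samples:
        membrane += x
        if membrane >= threshold:
            pulses.append(1)
            membrane -= threshold
        else:
            pulses.append(0)
    return pulses

# The accumulated signal crosses the threshold at the 3rd and 5th samples
pulse_train = analog_to_spikes([0.4, 0.4, 0.4, 0.1, 0.9], threshold=1.0)
```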
  • the specific use of the simulation layer is not particularly limited.
  • at least one of the simulation layers of the neural network is a retinal cell simulation layer, and the signals input to the retinal cell simulation layer include multiple single-color image signals and multiple grayscale description signals, so that the color reconstruction signal and the optical flow reconstruction signal are obtained through the retinal cell simulation layer.
  • FIG. 12 shows the neuron topology of the neural network used in this embodiment.
  • the topology includes at least one retinal cell simulation layer, and each retinal cell simulation layer includes multiple neuron models as described in any embodiment of the present disclosure.
  • the neural network may include multiple retinal cell simulation layers networked in a preset manner, and the neuron models in two directly connected retinal cell simulation layers are connected by at least one of forward direct connection, reverse direct connection, forward cross-layer connection, and reverse cross-layer connection, where the forward direction is the transmission direction of the signal from input to output.
  • at least one neuron model included in the retinal cell simulation layer is used as a first retinal simulation neuron model, and the remaining neuron models in the retinal cell simulation layer are used as second retinal simulation neuron models.
  • the first retinal simulation neuron models of each retinal cell simulation layer collectively form a first retinal simulation pathway; the remaining neuron models included in each retinal cell simulation layer collectively form a second retinal simulation pathway.
  • step S220 may specifically include:
  • the multiple input single-color image signals are processed through the neuron models in the first retinal simulation pathway to obtain the color reconstruction signal, and the multiple input grayscale description signals are processed through the neuron models in the second retinal simulation pathway to obtain the optical flow reconstruction signal.
  • different neuron models in each retinal cell simulation layer may be used, without overlap, to generate the final color reconstruction signal and optical flow reconstruction signal, or one or more neuron models may be shared in at least one retinal cell simulation layer; this embodiment imposes no limitation on this.
  • all single-color image signals may be input only to the first retinal simulation pathway and all grayscale description signals only to the second retinal simulation pathway; alternatively, several of the single-color image signals may also be input into the second retinal simulation pathway for mixed calculation, or several of the grayscale description signals may also be input into the first retinal simulation pathway for mixed calculation.
  • along the input-to-output direction (i.e., the signal transmission direction), retinal cell simulation layer 1, retinal cell simulation layer 2, retinal cell simulation layer 3, and retinal cell simulation layer 4 are included in turn.
  • Multiple neuron models are included in each retinal cell mimic layer.
  • the retinal cell simulation layer 1 includes neuron model 1 - neuron model 4 .
  • the input of the topology (the signal input into retinal cell simulation layer 1) is multiple single-color image signals and multiple grayscale description signals, and the output of the topology (the signal output by retinal cell simulation layer 4) is the color reconstruction signal and the optical flow reconstruction signal.
  • the single-color image signals may specifically be image signals of the three primary colors, i.e., an R image signal, a G image signal, and a B image signal.
  • cone cells are sensitive to absolute light intensity information and color information, so they offer high image restoration accuracy but slow speed.
  • rod cells cannot perceive color or absolute light intensity information; they mainly perceive changes in light intensity, so they are fast and have a large dynamic range.
  • the traditional methods disclosed in the related art only model a single retinal cell or a small number of cell groups, and lack a theoretical model for large-scale retinal modeling and simulation.
  • the inventors found through research that combining artificial neural network units can realize high-quality color image signal reconstruction based on frequency coding, while combining spiking neural network units can realize event-driven high-speed optical flow signal reconstruction based on time coding. Since the technical solutions of the embodiments of the present disclosure already construct a general neuron model that can gate either an artificial neural network or a spiking neural network, the neuron model can be used as the minimum unit to build a retinal neuron topology capable of reconstructing color image signals and optical flow signals at the same time.
  • a unified visual perception paradigm heterogeneously fusing spiking neural network units and artificial neural network units can be obtained, combining the advantages of the two types of units to achieve better performance and efficiency when dealing with complex systems.
  • This hybrid solution is suitable for edge sensor applications, automotive applications, drone applications, robots, etc., as well as occasions requiring high precision, low latency, and high energy efficiency.
  • the retinal cell simulation layer is used to simulate the real cell layers in the retina, for example, the outer plexiform layer, the inner nuclear layer, the inner plexiform layer, and the ganglion cell layer.
  • multiple neuron models in different retinal cell simulation layers are used to simulate the signal processing process of neurons in real cell layers.
  • retinal cell simulation layer 1 may be used to simulate the outer plexiform layer
  • retinal cell simulation layer 2 may be used to simulate the inner nuclear layer
  • retinal cell simulation layer 3 may be used to simulate the inner plexiform layer
  • retinal cell simulation layer 4 may be used to simulate the ganglion cell layer.
  • the neural network includes multiple retinal cell simulation layers (the four retinal cell simulation layers shown in FIG. 12 are only an example).
  • the neuron models in two directly connected retinal cell simulation layers are connected by at least one of forward direct connection, reverse direct connection, forward cross-layer connection, and reverse cross-layer connection, where the forward direction is the transmission direction of the signal from input to output.
  • for example, neuron model 1 in retinal cell simulation layer 1 is forward directly connected to neuron model 5 in retinal cell simulation layer 2; neuron model 13 in retinal cell simulation layer 4 is reverse directly connected to neuron model 10 in retinal cell simulation layer 3; neuron model 6 in retinal cell simulation layer 2 is forward cross-layer connected to neuron model 13 in retinal cell simulation layer 4; and neuron model 13 is reverse cross-layer connected to neuron model 5 in retinal cell simulation layer 2.
  • the design method of cross-layer connection is adopted to achieve efficient feature extraction over multi-scale receptive fields, and especially over mixed receptive fields.
  • the neural network units adapted to each of the neuron models in the retinal neuron topology are obtained by training in an unsupervised learning manner.
  • the neural network unit gated by each neuron model in the retinal neuron topology is trained using at least one of the back-propagation algorithm, the winner-take-all algorithm, and the spike-timing-dependent plasticity algorithm.
  • the grayscale description signal may include: a grayscale image signal or an optical flow signal.
  • the optical flow signal refers to the real-time light intensity change.
  • the real-time light intensity change specifically refers to the light intensity change of a certain pixel in the color image at a certain moment, or it can also be expressed as a relative gray value (brightness value) change.
  • the amount of change in light intensity represents the amount of change between the current brightness value of the pixel and the historical brightness value at a previous moment.
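Written out, the light-intensity change is simply the per-pixel difference between the current brightness value and the historical brightness value; the nested-list frame layout below is only for illustration.

```python
def intensity_change(current_frame, previous_frame):
    """Per-pixel light-intensity change: current brightness minus the
    historical brightness at the previous moment."""
    return [[cur - prev for cur, prev in zip(row_now, row_then)]
            for row_now, row_then in zip(current_frame, previous_frame)]

delta = intensity_change([[10, 20], [30, 40]],
                         [[10, 25], [20, 40]])
# delta[0][1] == -5: that pixel dimmed by 5 brightness levels
```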
  • each neuron model in the first retinal simulation pathway selects a first neural network unit; and each neuron model in the second retinal simulation pathway selects a second neural network unit.
  • the first neural network unit may be an artificial neural network (ANN) unit
  • the second neural network unit may be a spiking neural network (SNN) unit.
  • each ANN unit in the first retinal simulation pathway can be used to simulate the parasol cell pathway in the retina of the human eye, so as to realize high-quality color image signal reconstruction based on frequency coding; in this pathway, the cone cells, cone horizontal cells, bipolar cells, amacrine cells, and parasol ganglion cells in the retina are mainly simulated.
  • each SNN unit in the second retinal simulation pathway can be used to simulate the midget cell pathway in the retina of the human eye, so as to realize event-driven high-speed optical flow signal reconstruction based on time coding; in this pathway, the rod cells, rod horizontal cells, bipolar cells, amacrine cells, and midget ganglion cells in the retina are mainly simulated.
  • the single-color image signal is a voltage signal representing the light intensity of the light signal, collected by the color image sensing circuit in the dual-modal vision sensor, and the grayscale description signal is a current signal representing the light intensity change of the light signal, collected by the light intensity change sensing circuit in the dual-modal vision sensor.
  • the signal input end of the first retinal simulation pathway is used for inputting the voltage signal of the light intensity of the light signal collected by the color image sensing circuit in the dual-modal vision sensor, and the signal input end of the second retinal simulation pathway is used for inputting the current signal of the light intensity change of the light signal collected by the light intensity change sensing circuit in the dual-modal vision sensor.
  • the dual-modal vision sensor may specifically include a first sensing circuit (also referred to as the light intensity change sensing circuit) and a second sensing circuit (also referred to as the color image sensing circuit);
  • the first sensing circuit is used for extracting the optical signal of a first set wavelength band from the target optical signal and outputting a current signal representing the light intensity change of the optical signal of the first set wavelength band;
  • the second sensing circuit is used for extracting the optical signal of a second set wavelength band from the target optical signal and outputting a voltage signal representing the light intensity of the optical signal of the second set wavelength band.
  • the first sensing circuit includes a first excitatory photosensitive unit and a first inhibitory photosensitive unit, both of which are used to extract the optical signal of the first set wavelength band from the target optical signal and convert it into a current signal;
  • the first sensing circuit is further configured to output the current signal representing the light intensity change of the optical signal of the first set wavelength band according to the difference between the current signals converted by the first excitatory photosensitive unit and the first inhibitory photosensitive unit.
  • the second sensing circuit includes at least one second photosensitive unit, which is used to extract the optical signal of the second set wavelength band from the target optical signal and convert it into a current signal;
  • the second sensing circuit is further configured to output the voltage signal representing the light intensity of the optical signal of the second set wavelength band according to the current signal converted by the second photosensitive unit.
  • the above voltage-current dual-modal bionic vision sensor (dual-modal vision sensor) can be used; like the human retina, it can simultaneously acquire high-speed spatial gradient signals (rods, ganglion cells, and horizontal cells) and low-speed color signals (cones), and then, through the retinal neuron topology described in the embodiments of the present disclosure, the color reconstruction signal and the optical flow reconstruction signal can be reconstructed in a human-eye-like manner from the voltage signal of the light intensity of the light signal and the current signal of the light intensity change of the light signal.
  • the grayscale description signals input to the first retinal simulation pathway may be a subset of the grayscale description signals input to the second retinal simulation pathway.
  • at least one of the simulation layers may be an input signal simulation layer; similarly, the input signal simulation layer includes at least one neuron model according to any embodiment of the present disclosure.
  • the output terminals of the input signal simulation layer are respectively connected to the input terminal of the first retinal simulation pathway and the input terminal of the second retinal simulation pathway.
  • the processing method may further include:
  • the input light simulation signal is processed by the input signal simulation layer to obtain multiple single-color image signals and multiple grayscale description signals;
  • the multiple single-color image signals and at least one grayscale description signal are sent through the input signal simulation layer to the input end of the first retinal simulation pathway, and multiple grayscale description signals are sent to the input end of the second retinal simulation pathway.
  • the rod and cone layer in the retina of the human eye can generate multiple single-color image signals and multiple grayscale description signals from the light signal. Therefore, besides using the dual-modal vision sensor to obtain the input signals required by the retinal neuron topology, the rod and cone layer can also be simulated directly through the input signal simulation layer to directly obtain the required input signals.
  • as mentioned above, the signal input to the first retinal simulation pathway (for simulating the parasol cell pathway) needs to include at least one grayscale description signal, and through the mixed calculation of the grayscale description signal and multiple single-color image signals in the first retinal simulation pathway, the color reconstruction signal can finally be obtained.
  • FIG. 13 shows a schematic diagram of a specific retinal neuron topology in an embodiment of the present disclosure.
  • a hierarchically structured network framework is used in the retinal neuron topology, corresponding to the simplified multi-layer retinal structure: the rod and cone layer, the outer plexiform layer, the inner nuclear layer, the inner plexiform layer, and the ganglion cell layer.
  • the simulation network includes both a bottom-up feedforward process and a top-down feedback process.
  • heterogeneous fusion of ANN units and SNN units can be used to simulate various neurodynamic phenomena in the retina.
  • the parasol cell pathway can be simulated by the ANN units to achieve high-quality color image signal reconstruction based on frequency coding; in this pathway, the cone cells, cone horizontal cells, bipolar cells, amacrine cells, and parasol ganglion cells in the retina are mainly simulated.
  • the SNN units can be used to simulate the midget cell pathway to realize event-driven high-speed optical flow signals based on time coding; in this pathway, the rod cells, rod horizontal cells, bipolar cells, amacrine cells, and midget ganglion cells in the retina are mainly simulated.
  • STDP (Spike Timing Dependent Plasticity)
  • WTA (Winner Take All)
  • Horizontal cells summarize the received signal strength of the photoreceptor cells, measure the average brightness of light on the retina within a certain area, and feed back inhibitory signals to adjust the output signal of the photoreceptor cells to an appropriate level, so that the signal received by the bipolar cells is neither so small that it is submerged in the noise of the neural pathway nor so large that it oversaturates the pathway, greatly improving the adaptive capability and dynamic range of the model.
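This gain-control loop can be sketched with a divisive feedback rule; the divisive form and the target level are assumptions for illustration, since the text does not fix the exact inhibitory rule.

```python
def horizontal_cell_feedback(photoreceptor_outputs, target_level=1.0):
    """Measure the average local brightness and feed back an inhibitory
    (here: divisive) signal so the photoreceptor outputs sit near the
    target level: neither drowned in noise nor saturating the pathway."""
    mean_level = sum(photoreceptor_outputs) / len(photoreceptor_outputs)
    if mean_level == 0:
        return list(photoreceptor_outputs)
    gain = target_level / mean_level
    return [gain * x for x in photoreceptor_outputs]

# A bright patch (mean 4.0) is rescaled so its average sits at the target level
adjusted = horizontal_cell_feedback([2.0, 4.0, 6.0])
```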
  • among the neuron models in the rod and cone layer, the first three output multiple single-color image signals to the parasol cell pathway in the outer plexiform layer, and the latter three output multiple grayscale description signals to the midget cell pathway in the outer plexiform layer, where the grayscale description signal output by the fourth neuron model is also provided to the parasol cell pathway.
  • This embodiment provides a computer-readable storage medium on which a neuron model and an executable program are stored, where the neuron model includes at least two independent neural network units, each used to process signals of a different data format; when the executable program is invoked, the above signal processing method using a neuron model provided by the present disclosure can be implemented.
  • This embodiment provides a computer-readable storage medium on which a neural network and an executable program are stored,
  • the neural network includes at least one simulation layer, the simulation layer includes a data format conversion device and at least one neuron model, and the neuron model includes at least two independent neural network units, each used to process signals of a different data format;
  • This embodiment provides an electronic device, including:
  • one or more processors;
  • One or more I/O interfaces connected between the processor and the memory, are configured to realize the information interaction between the processor and the memory.
  • This embodiment provides an electronic device, including:
  • one or more processors;
  • One or more I/O interfaces connected between the processor and the memory, are configured to implement information interaction between the processor and the memory.
  • the processor is a device with data processing capability, including but not limited to a central processing unit (CPU), etc.
  • the memory is a device with data storage capability, including but not limited to random access memory (RAM; more specifically, SDRAM, DDR, and the like), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) is connected between the processor and the memory and includes but is not limited to a data bus (Bus) and the like.
  • the processor, memory, and I/O interfaces are interconnected by a bus, and in turn are connected to other components of the computing device.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and can include any information delivery media, as is well known to those of ordinary skill in the art.


Abstract

The present disclosure provides a processing method for processing signals using a neuron model, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format; the method includes: receiving an input signal; determining, from the input signal, the neural network unit capable of processing the input signal; and processing the input signal with the neural network unit capable of processing the input signal. The present disclosure also provides a method for processing signals using a neuron network, a processing apparatus, an electronic device, and a computer-readable storage medium. The technical solution provided by the present disclosure optimizes the structure of existing neuron models and achieves mixed processing of signals in multiple data formats while simulating neuron functions, meeting the growing demand for personalized and convenient construction of neuron topologies.

Description

Processing method, medium, and device for processing signals using a neuron model and network. Technical Field
Embodiments of the present disclosure relate to artificial intelligence technology, and in particular to a processing method for processing signals using a neuron model, a processing method for processing signals using a neural network, a computer-readable storage medium, and an electronic device.
Background
Artificial intelligence is the discipline of making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, or planning). It mainly covers the principles by which computers realize intelligence and the construction of computers resembling the intelligence of the human brain, enabling computers to support higher-level applications.
With the continuous development of artificial intelligence technology, various neural network units for simulating neurons in the human brain have emerged, for example ANN (Artificial Neural Network) units and SNN (Spiking Neural Network) units.
In the prior art, a particular type of neural network unit can only process signals of one particular data format, whereas the human brain, when performing a complex processing function such as retinal imaging, may need to process signals of multiple data formats in a mixed fashion. The neural network units provided by the prior art cannot meet such complex data-processing requirements.
Summary
Embodiments of the present disclosure provide a processing method for processing signals using a neuron model, a processing method for processing signals using a neural network, a computer-readable storage medium, and an electronic device, so as to achieve mixed processing of signals in multiple data formats while simulating complex neuron functions.
As a first aspect of the present disclosure, there is provided a processing method for processing signals using a neuron model, wherein the neuron model includes at least two independent neural network units, each neural network unit being used to process signals of a different data format; the processing method includes:
receiving an input signal;
determining, from the input signal, the neural network unit capable of processing the input signal;
processing the input signal with the neural network unit capable of processing the input signal.
As a second aspect of the present disclosure, there is provided a processing method for processing signals using a neural network, the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format;
the processing method for processing signals using a neural network includes:
generating, from initial information, an input signal matching the encoding scheme corresponding to each neuron model;
processing, with each neuron model, the corresponding input signal using the processing method provided in the first aspect of the present disclosure, wherein generating, from initial information, an input signal matching the encoding scheme corresponding to each neuron model includes:
when the encoding scheme of the initial information does not match the encoding scheme corresponding to the neuron model processing the initial information, converting the initial information, by means of the data format conversion device, into an input signal whose encoding type matches the encoding scheme corresponding to that neuron model;
when the encoding scheme of the initial information matches the encoding scheme corresponding to the neuron model processing the initial information, using the initial information as the input signal of that neuron model.
As a third aspect of the present disclosure, there is provided a signal processing apparatus for processing signals using a neuron model, wherein the signal processing apparatus includes:
a neuron model storage module storing a neuron model, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format;
an input signal receiving module for receiving an input signal;
a target determination module for determining, from the input signal, the neural network unit capable of processing the input signal;
a gating device connected to each of the neural network units, the gating device being used to gate the neural network unit, determined by the target determination module, that is capable of processing the input signal received by the input signal receiving module, so that this neural network unit processes the input signal.
As a fourth aspect of the present disclosure, there is provided a computer-readable storage medium storing a neuron model and an executable program, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format; when the executable program is invoked, the processing method described in the first aspect of the present disclosure can be implemented.
As a fifth aspect of the present disclosure, there is provided a computer-readable storage medium storing a neural network and an executable program,
the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format;
when the executable program is invoked, the processing method provided in the second aspect of the present disclosure can be implemented.
As a sixth aspect of the present disclosure, there is provided an electronic device, including:
one or more processors;
a memory storing a neuron model and one or more executable programs, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format; when the one or more programs are executed by the one or more processors, the one or more processors implement the processing method provided in the first aspect of the present disclosure; and
one or more I/O interfaces connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
As a seventh aspect of the present disclosure, there is provided an electronic device, including:
one or more processors;
a memory storing a neural network and one or more executable programs, the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each neural network unit being used to process signals of a different data format; when the one or more programs are executed by the one or more processors, the one or more processors implement the processing method provided in the second aspect of the present disclosure; and
one or more I/O interfaces connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
The technical solution of the embodiments of the present disclosure provides a novel neuron model that includes at least two independent neural network units, each used to process signals of a different data format, with only one neural network unit gated among the neural network units of the neuron model. By applying this neuron model, large-scale neuron topologies satisfying the required neuron processing functions can be constructed through gating control. This overcomes the technical defect that the prior art cannot simulate neurons' mixed processing of heterogeneous information, proposes a new form of heterogeneously fused hybrid neural network computation, optimizes the structure of existing neuron models, achieves mixed processing of signals of multiple data formats while simulating complex neuron functions, and meets the growing demand for personalized and convenient construction of neuron topologies.
Brief Description of the Drawings
FIG. 1 is a flowchart of the processing method provided in Embodiment 1 of the present disclosure;
FIG. 2 is a schematic diagram of a neuron model in Embodiment 1 of the present disclosure;
FIG. 3 is a flowchart of one implementation of step S120;
FIG. 4 is a block diagram of a processing apparatus in Embodiment 2 of the present disclosure;
FIG. 5 is a block diagram of another processing apparatus in Embodiment 2 of the present disclosure;
FIG. 6 is a block diagram of yet another processing apparatus in Embodiment 2 of the present disclosure;
FIG. 7 is a schematic diagram of a simulation layer in Embodiment 3 of the present disclosure;
FIG. 8 is a flowchart of the method provided in Embodiment 3 of the present disclosure;
FIG. 9 is a schematic diagram of the topology of the simulation layer involved in Embodiment 3 of the present disclosure;
FIG. 10 is a schematic diagram of another simulation layer in Embodiment 3 of the present disclosure;
FIG. 11 is a schematic diagram of yet another simulation layer in Embodiment 3 of the present disclosure;
FIG. 12 is a schematic diagram of a retinal neuron topology in Embodiment 3 of the present disclosure;
FIG. 13 is a schematic diagram of a specific retinal neuron topology in Embodiment 3 of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are merely intended to explain the present disclosure, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present disclosure rather than the entire structure.
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Embodiment 1
FIG. 1 shows the processing method for processing signals using a neuron model provided in Embodiment 1 of the present disclosure. As shown in FIG. 2, the neuron model includes at least two independent neural network units (shown in FIG. 2 as neural network unit 1, neural network unit 2, ..., neural network unit n). Each neural network unit is used to process signals of a different data format.
As shown in FIG. 1, the processing method includes:
In step S110, receiving an input signal;
In step S120, determining, from the input signal, the neural network unit capable of processing the input signal;
In step S130, processing the input signal with the neural network unit capable of processing the input signal.
When the neuron model is used to process an input signal, one neural network unit among the neural network units of the neuron model is gated.
In this embodiment, a neural network unit specifically refers to a minimal computation unit for simulating the computational function of a neuron. The neural network unit may be implemented in software and/or hardware, and a given neural network unit can process signals of a particular data format, where the data formats may include analog signals, pulse signals, digital level signals, and the like. In the neuron model, different neural network units are used to process signals of different data formats.
In general, signals of different data formats correspond to different computational precisions or signal encoding schemes. For example, if neural network unit A is used to process analog signals, it can process and transmit higher-precision signals; but because it cannot encode analog signals, the computational tasks it can handle are relatively limited and its application scenarios are restricted. If neural network unit B is used to process pulse signals, it can handle signals with rich encoding forms, for example temporal encoding or spatiotemporal encoding; its application scenarios are therefore broader, but its signal precision is lower.
As mentioned above, in the related art, a neural network of a particular type is generally built only from neural network units that process one data format, so that network can only process signals of that data format. In this embodiment, the inventors creatively propose a new neuron model structure. The neuron model contains multiple mutually independent neural network units, each used to process signals of a different data format. In actual use, according to the data format of the signal that actually needs to be processed, the neural network unit matching the input signal is gated when the neuron model processes the input signal. By using a neuron topology obtained by combining the above multiple neural network units, mixed processing of signals of different data formats can be achieved.
Optionally, the neuron model may have multiple input terminals and multiple matching output terminals, where connecting different input and output terminals gates different neural network units; alternatively, the neuron model may have a single input terminal and a single output terminal, with a gating switch inside the model through which the gated neural network unit can be connected to the single input terminal and the single output terminal.
Alternatively, different output terminals may be configured according to the neural network unit gated by the neuron model or according to different signal processing requirements.
For example, if the neuron model gates the neural network unit for processing analog signals, the output terminal of that unit can be used directly as the output terminal of the neuron model. If the neuron model gates the neural network unit for processing pulse signals and only a single pulse signal processed by the neuron model is subsequently needed, the output terminal of that unit can be used directly as the output terminal of the neuron model; or, if the neuron model gates the neural network unit for processing pulse signals and multiple pulse signals processed by the neuron model are subsequently needed, all (or several) output terminals of the last hidden layer of that unit can be used as the output terminals of the neuron model. This embodiment imposes no limitation on this.
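A minimal Python sketch of such a neuron model with a gating switch between its independent units; the dictionary layout and the stand-in unit functions are illustrative assumptions, not the patent's implementation.

```python
class NeuronModel:
    """Holds several independent neural network units, one per data format,
    and gates exactly one of them for each input signal."""

    def __init__(self, units):
        # units maps a data-format name to the processing callable of one unit
        self.units = units

    def process(self, data_format, signal):
        if data_format not in self.units:
            raise ValueError(f"no unit can process format {data_format!r}")
        return self.units[data_format](signal)  # gate exactly one unit

model = NeuronModel({
    "analog": lambda s: sum(s) / len(s),  # stand-in for an ANN-style unit
    "pulse":  lambda s: s.count(1),       # stand-in for an SNN-style unit
})
analog_out = model.process("analog", [0.25, 0.50, 0.75])
pulse_out = model.process("pulse", [0, 1, 1, 0, 1])
```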
The technical solution of the embodiments of the present disclosure provides a novel neuron model that includes at least two independent neural network units, each used to process input signals of a different data format, with only one neural network unit gated among the neural network units of the neuron model. By applying this neuron model, large-scale neuron topologies satisfying the required neuron processing functions can be constructed through gating control. This overcomes the technical defect that the prior art cannot simulate neurons' mixed processing of heterogeneous information, proposes a new form of heterogeneously fused hybrid neural network computation, optimizes the structure of existing neuron models, achieves mixed processing of signals of multiple data formats while simulating complex neuron functions, and meets the growing demand for personalized and convenient construction of neuron topologies.
Based on the above embodiments, the neural network units in the neuron model may include a first neural network unit and a second neural network unit;
the first neural network unit is used to process and output analog signals, and the second neural network unit is used to process and output pulse signals.
In this embodiment, by providing in the neuron model a first neural network unit for processing analog signals and a second neural network unit for processing pulse signals, a heterogeneously fused neural network capable of mixed processing of analog and pulse signals can be constructed by uniformly using the neuron model provided by the embodiments of the present disclosure, so that this heterogeneously fused neural network has higher signal precision when processing analog signals and more encoding options when processing pulse signals.
Based on the above embodiments, the first neural network unit may be an artificial neural network (ANN) unit, and the second neural network unit may be a spiking neural network (SNN) unit.
ANN units process analog information directly and transmit high-precision activations. They show strong capability in scenarios requiring high-precision computation, but involve enormous computational operations, leading to high power consumption and large latency. SNN units memorize historical temporal information through intrinsic neuron dynamics and encode information into digital spike trains, thereby enabling event-driven computation. Their input data usually contains temporal information, and the data flow is sparse. SNN units therefore have richer information encoding schemes than ANN units, such as temporal encoding, spatiotemporal encoding, population encoding, Bayesian encoding, delay encoding, and sparse encoding. Because they can perform integrated spatiotemporal encoding, SNN units have great potential for processing complex spatiotemporal and multimodal information tasks. However, the discontinuity of SNN units, the complexity of spatiotemporal encoding, and the uncertainty of the network structure make it difficult to describe the network as a whole mathematically and hard to build effective, general supervised learning algorithms. SNN units therefore typically have limited computation and low energy consumption, but lower precision. ANN units and SNN units thus have complementary strengths suited to different application scenarios, and by using the neuron model provided by the embodiments of the present disclosure, application scenarios that exploit the combined strengths of ANN units and SNN units can be accommodated.
The present disclosure does not specifically limit how step S120 is performed. Optionally, as shown in FIG. 3, step S120 may include:
In step S121, parsing the routing information included in the input signal;
In step S122, determining, as the neural network unit capable of processing the input signal, the neural network unit whose unit identifier matches the unit identifier carried in the routing information.
When the input signal is encoded, the routing information may be generated using the unit identifier of the neural network unit capable of processing the input signal.
Illustratively, take a neuron model including a first neural network unit and a second neural network unit, and assume the first unit's identifier is 0001 and the second unit's identifier is 0010. If the first unit is the one capable of processing the input signal, then "0001" is used when encoding the input signal to generate the routing information.
Accordingly, after the neuron model receives the input signal, it parses the routing information carried therein and obtains the unit identifier "0001". It then compares "0001" with the first unit's identifier 0001 and the second unit's identifier 0010; since the identifier carried in the routing information matches the first unit's identifier, the first neural network unit is finally determined as the unit capable of processing the input signal.
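Steps S121/S122 above can be sketched as a small lookup. The packet layout (a dict with a `"routing"` field) and the mapping of bit codes to unit names are assumptions for illustration; only the identifiers "0001"/"0010" follow the example in the text.

```python
# Illustrative sketch of S121 (parse routing info) and S122 (match unit ID).
# The signal structure and field names are assumptions, not the patent's format.

UNIT_IDS = {"0001": "first_unit", "0010": "second_unit"}

def parse_routing(signal):
    """S121: extract the routing information carried in the input signal."""
    return signal["routing"]

def select_unit(signal):
    """S122: gate the unit whose identifier matches the routing information."""
    unit_id = parse_routing(signal)
    if unit_id not in UNIT_IDS:
        raise ValueError(f"no unit can process identifier {unit_id!r}")
    return UNIT_IDS[unit_id]
```

For example, a signal carrying `"0001"` selects the first unit, and an unknown identifier is rejected rather than silently routed.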
In the present disclosure, a plurality of neuron models (equivalent to transmission nodes) may be networked to obtain a neuron network, and routing information may be added to each input signal sent into the neuron network. The routing information specifies the order of neuron models each input signal must pass through and, for each neuron model on the path, the neural network unit to be gated (i.e., the unit capable of processing the input signal). Thus, when a neuron model receives an input signal, it can determine the unit to be gated simply and conveniently by parsing the routing information included in the signal.
Embodiment 2
FIG. 4 shows a signal processing apparatus for processing signals using a neuron model according to Embodiment 2 of the present disclosure. As shown in FIG. 4, the signal processing apparatus includes a neuron model storage module 210, an input signal receiving module 220, a target determination module 230, and a gating device 240.
The neuron model storage module 210 stores the neuron model described in Embodiment 1, i.e., the neuron model includes at least two independent neural network units, each configured to process signals of a different data format.
The input signal receiving module 220 is configured to perform step S110, i.e., to receive an input signal.
The target determination module 230 is configured to determine, according to the input signal, the neural network unit capable of processing the input signal.
The gating device 240 is connected to each of the neural network units and is configured to gate the connection between the input signal receiving module and the neural network unit determined by the target determination module as capable of processing the input signal, so that the input signal is processed by that unit.
The processing apparatus is configured to perform the processing method provided in Embodiment 1 of the present disclosure; the advantages and beneficial effects of the processing method have been described in detail above and are not repeated here.
FIG. 5 shows a module diagram of a processing apparatus. In FIG. 5, the neuron model includes two neural network units, referred to for convenience as a first neural network unit 211 and a second neural network unit 212. In FIG. 5, the gating device 240 is connected to both the first neural network unit 211 and the second neural network unit 212, indicating that it can select one of them as the unit that processes the input signal.
In the embodiment shown in FIG. 5, the neuron model may have only a single input port and a single output port. Inside the neuron model, the single input port is connected to the input of the gating device 240; the outputs of the gating device 240 are connected to the inputs of the first neural network unit 211 and the second neural network unit 212, respectively; and the outputs of the two units are each connected to the single output port.
As an optional embodiment, the gating device 240 may be a manually controlled device (e.g., a DIP switch or a push switch) or a program-controlled device (e.g., a program-controlled switch or a one-of-many data selector; in this embodiment, a one-of-two data selector); the present disclosure places no restriction on this. By using the gating device, the gated path can be controlled manually or by a controller, so that the neuron model gates the required neural network unit.
By arranging the gating device inside the neuron model, the technical solution of the embodiments of the present disclosure simplifies the external connection of the neuron model: it can be interconnected with other neuron models through only a single input port and a single output port. Various required neuron topologies can then be obtained flexibly through simple control of the gating devices in the interconnected neuron models.
On the basis of the above embodiments, the gating device may be a program-controlled device, and the processing apparatus may further include a gating control device 250 connected to the gating device 240;
the gating control device 250 being configured to send, to the gating device, a gating control instruction matching the neural network unit capable of processing the input signal.
Accordingly, the gating device 240 is specifically configured to gate, according to the gating control instruction, the neural network unit in the neuron model capable of processing the input signal.
In this optional embodiment, the gating device 240 may be controlled programmatically to further improve the intelligence and versatility of the whole neuron model. Specifically, in this embodiment, the gating control device 250 performs gating control of the gating device 240. Optionally, the gating control device 250 may be a control chip that determines the unit currently to be gated according to a control instruction input by the user (received by wire and/or wirelessly), or determines the unit capable of processing the input signal directly from the output of the target determination module 230. The gating control device 250 then determines, according to a preset correspondence between units to be gated and gating control instructions, the gating control instruction matching the unit capable of processing the input signal, and sends it to the gating device 240.
Optionally, the processing apparatus may further include a wireless transceiver module connected to the gating control device 250; the gating control device 250 may determine the unit to be gated according to a control instruction received by the wireless transceiver module, or according to the input signal input to the neuron model.
The benefit of providing the gating control device 250 is that, through simple program control, the neuron topology built from a plurality of neuron models can be dynamically adjusted, further extending the usage scenarios of the neuron model.
Embodiment 3
FIG. 6 is a schematic diagram of the processing apparatus according to Embodiment 3 of the present disclosure. As shown in FIG. 6, in this embodiment the gating control device 250 is connected to the input port of the neuron model.
Accordingly, the gating control device 250 is configured to parse the routing information included in the input signal, and the target determination module (not shown) is configured to determine, according to the routing information, the unit to be gated. Further, the gating control device 250 sends, to the gating device 240, the gating control instruction matching the unit to be gated. The gating device 240 is then specifically configured to gate the corresponding neural network unit in the neuron model according to the gating control instruction, and to forward the input signal to the gated unit (i.e., one of the first neural network unit 211 and the second neural network unit 212).
In this embodiment, a plurality of neuron models (equivalent to transmission nodes) may be networked into a neuron network, and routing information may be added to each input signal sent into the network, specifying the order of neuron models each input signal must pass through and the target neural network unit to be gated in each neuron model on the path. Thus, when a neuron model receives an input signal, it can determine the target unit to be gated simply and conveniently by parsing the routing information included in the signal.
With the above arrangement, on the basis of a pre-built standard neural network, a variety of different neuron topologies can be realized flexibly simply by setting different routing information, further extending the usage scenarios and versatility of the neuron model.
Embodiment 4
FIG. 7 is a schematic diagram of a neuron topology of a neural network according to Embodiment 4 of the present disclosure. As shown in FIG. 7, the neural network includes at least one simulation layer, the simulation layer including a data format converter 420 and at least one neuron model. The neuron model is the one provided in Embodiments 1 to 3 of the present disclosure, i.e., it includes at least two independent neural network units, each configured to process signals of a different data format.
As shown in FIG. 8, this embodiment provides a method for processing signals using a neural network, the method including:
In step S210, generating, from initial information, input signals matching the encoding scheme corresponding to each neuron model;
In step S220, processing the corresponding input signals with each neuron model, using the processing method provided in Embodiment 1 of the present disclosure.
Generating, from the initial information, the input signals matching the encoding scheme corresponding to each neuron model (i.e., step S210) includes:
In step S211, when the encoding scheme of the initial information does not match the encoding scheme corresponding to the neuron model processing the initial information, converting the initial information, using the data format conversion device, into an input signal whose encoding type matches the encoding scheme corresponding to that neuron model;
In step S212, when the encoding scheme of the initial information matches the encoding scheme corresponding to the neuron model processing the initial information, using the initial information as the input signal of that neuron model.
The method provided in this embodiment does not specifically limit the number of neuron models included in the simulation layer of the neural network; the simulation layer may include one neuron model or a plurality of neuron models.
Any neuron model can process data according to the processing method provided in Embodiment 1 of the present disclosure, so the method provided in this embodiment likewise has the advantage of hybrid processing of signals in multiple data formats.
The neural network provided in this embodiment has a neuron topology suited to the neuron models provided in the embodiments of the present disclosure, and in that neuron topology an implementation of hybrid processing of signals in multiple data formats is given.
FIG. 9 is a schematic diagram of the topology of one simulation layer of the neural network. As shown in the figure, different neuron models in the simulation layer can transmit signals pairwise by direct connection or by relay. A data format conversion device may be connected between at least two directly connected neuron models in the simulation layer, locally forming a neuron topology.
The present disclosure does not specifically limit the type or source of the "initial information". For example, the initial information may be input externally or output by another neuron model. For a neuron model directly connected to the input of the neural network, its initial information is information input from outside the neural network.
It should be noted that the initial information carries routing information.
Optionally, the routing information includes identifiers of routing neuron models and identifiers of the neural network units gated in the routing neuron models. A "routing neuron model" is any neuron model on the path indicated by the routing information.
Specifically, the routing information includes a plurality of sequentially ordered routing-neuron-model identifiers, i.e., a routing-neuron-model identifier sequence, which specifies the neuron models (i.e., routing neuron models) into which the input signal is to be sequentially input for signal processing, and further specifies, by neural network unit identifier, which neural network unit each routing neuron model is to gate for processing the network input signal.
After each routing neuron model processes its corresponding input signal, it may mark the current processing position in the routing-neuron-model identifier sequence and output it, so that the neuron model receiving the processed signal can quickly determine the next-hop neuron model and the neural network unit to be gated in that next hop.
After an externally provided network input signal enters the neuron network, each neuron model may receive that network input signal (or a network input signal already processed by other neuron models); both kinds of signal may collectively be called the target input signal.
Accordingly, when a neuron model receives the target input signal, it can parse the routing information from the target input signal and thereby obtain the next-hop neuron model and the unit gated therein. Each neuron model may be given a neuron-model identifier, so that, after parsing the routing-neuron-model identifiers included in the routing information, each neuron model can determine whether it is itself the next hop.
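The next-hop mechanism above can be sketched as follows. This is a hedged illustration only: the representation of the routing information as an ordered list of (model ID, unit ID) pairs with a position cursor is an assumption; the text does not prescribe a concrete encoding.

```python
# Illustrative sketch of hop-by-hop routing through a network of neuron models.
# Field names ("path", "pos") and the cursor scheme are assumptions.

def next_hop(routing, my_id):
    """Return the unit this model should gate, or None if this model
    is not the next hop in the routing sequence."""
    pos = routing["pos"]                  # current processing position
    if pos >= len(routing["path"]):
        return None                       # path exhausted
    model_id, unit_id = routing["path"][pos]
    return unit_id if model_id == my_id else None

def mark_processed(routing):
    """Advance the cursor after this model has processed the signal,
    so the receiver can locate its own hop."""
    routing["pos"] += 1
    return routing
```

A model checks whether it is the next hop; if so, it gates the indicated unit, processes the signal, and advances the cursor before forwarding.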
As an optional embodiment, the simulation layer includes a plurality of neuron models; to exploit the advantages of different types of neural network units within the simulation layer, different neuron models may gate different types of neural network units.
For ease of understanding, the following description takes as an example the simulation layer shown in FIG. 10, which includes a first neuron model 411 and a second neuron model 412.
In the first neuron model 411, neural network unit A is gated, unit A being configured to process signals of a first data format; in the second neuron model 412, neural network unit B is gated, unit B being configured to process signals of a second data format. When the simulation layer processes data, it thus combines the advantages of neural network unit A and neural network unit B.
To achieve hybrid processing of signals in different data formats, this embodiment introduces the data format conversion device 420 to convert between signals of different data formats. For the embodiment including the first neuron model 411 and the second neuron model 412, a different type of data format conversion device 420 may be chosen according to the specific forms of the first and second data formats, e.g., a spike-to-analog converter or an analog-to-spike converter; alternatively, different types of data format conversion units may be pre-configured in a unified data format conversion device 420 and the matching unit gated according to the specific forms of the two data formats. This embodiment places no restriction on this.
In the present disclosure, when the simulation layer includes a plurality of neuron models, the neuron models may be networked in a preset manner, two directly connected neuron models being linked through the format converter 420; the initial information of a later-stage neuron model is the information output by the preceding-stage neuron model. "Preceding" and "later" here refer to the direction of signal flow: of two neuron models, the one that receives the signal first is the preceding stage and the one that receives it later is the later stage. FIG. 9, for example, shows one simulation-layer topology. The present disclosure is of course not limited thereto; for example, a plurality of neuron models may be cascaded.
By introducing the data format conversion device, the technical solution of the embodiments of the present disclosure ensures that, with two adjacent stages of neuron models gating different types of neural network units, each neural network unit can still process signals in its adapted data format; it proposes a new form of heterogeneously fused hybrid neural network computation and effectively achieves hybrid processing of signals in multiple data formats.
In an optional implementation of this embodiment, the spiking neural network unit may be gated in the first neuron model, and the artificial neural network unit may be gated in the second neuron model.
Accordingly, step S211 is specifically performed as:
accumulating, with the data format conversion device, the spike signals output by the first neuron model at different times over a set duration to obtain an analog signal, and using the analog signal as the input signal of the later-stage neuron model.
In this optional implementation, the data format conversion device converts the spike signals output by the spiking neural network unit of the preceding-stage neuron model into the analog signal adapted to the artificial neural network unit of the later-stage neuron model. Specifically, the spike signals output by the first neuron model at different times are accumulated over a set duration to obtain the analog signal.
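The windowed accumulation just described can be sketched in a few lines. The window length and the optional gain factor are illustrative assumptions; the text only specifies that spikes are accumulated over a set duration into an analog value.

```python
# Minimal sketch of spike-to-analog conversion: spikes emitted over a set
# time window are accumulated into one analog value for the downstream ANN
# unit. Window length and gain are illustrative assumptions.

def spikes_to_analog(spike_train, window, gain=1.0):
    """Accumulate binary spikes over fixed-length windows into analog values."""
    values = []
    for start in range(0, len(spike_train), window):
        chunk = spike_train[start:start + window]
        values.append(gain * sum(chunk))  # accumulated spike count -> analog
    return values
```

For instance, the spike train `[1, 0, 1, 1, 0, 0]` accumulated over windows of three timesteps yields two analog values, one per window.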
As an optional embodiment, the input of the data format conversion device is connected to the output of the first neuron model, and the output of the data format conversion device is connected to the input of the second neuron model. With this arrangement, multiple spike signals can be obtained simply and conveniently.
The present disclosure is of course not limited thereto. As another optional embodiment, the input of the data format conversion device is connected to the output ports of the last hidden layer of the first neuron model, and the output of the data format conversion device is connected to the input of the second neuron model.
In this optional embodiment, as shown in FIG. 11, the inventors found by analyzing the structure of the spiking neural network unit that, although its output layer can output only a single spike signal, its last hidden layer can output multiple spike signals, from which the output layer ultimately derives that single spike signal. Therefore, in this optional embodiment, the output ports of the last hidden layer of the spiking neural network unit (outputs 1 to 5 in the example of FIG. 11) may all be led out of the preceding-stage neuron model; the data format conversion device is then connected not to the output port of the preceding-stage neuron model but to the output ports of the last hidden layer of the spiking neural network unit. Accordingly, the data format conversion device can obtain multiple spike signals at the same moment and quickly accumulate them into the required analog signal, which is provided to the artificial neural network unit of the later-stage neuron model, further reducing the computational latency of the whole neuron topology.
On the basis of the above embodiments, the artificial neural network unit may be gated in the preceding-stage neuron model and the spiking neural network unit in the later-stage neuron model.
Accordingly, step S211 may be specifically performed as:
sampling, according to a set sampling rule, the analog signal output by the preceding-stage neuron model, and using the sampled spike signals as the input signal of the later-stage neuron model.
With the above arrangement, the data format conversion device converts the analog signal output by the artificial neural network unit of the preceding-stage neuron model into the spike signals adapted to the spiking neural network unit of the later-stage neuron model, further ensuring hybrid processing of signals in multiple data formats.
Embodiment 5
The present disclosure does not specifically limit the use of the simulation layer. For example, at least one of the simulation layers of the neural network is a retinal cell simulation layer; the signals input to the retinal cell simulation layer include a plurality of single-color image signals and a plurality of grayscale description signals, so that a color reconstruction signal and an optical-flow reconstruction signal are obtained through the retinal cell simulation layer.
FIG. 12 shows the neuron topology of the neural network used in this embodiment. As shown in FIG. 12, the topology includes at least one retinal cell simulation layer, each retinal cell simulation layer including a plurality of neuron models according to any embodiment of the present disclosure.
As an optional embodiment, the neural network may include a plurality of the retinal cell simulation layers networked in a preset manner; the neuron models in two directly connected retinal cell simulation layers are connected by at least one of forward direct connection, backward direct connection, forward cross-layer connection, and backward cross-layer connection, where "forward" is the signal transmission direction from input to output.
To simulate the retina realistically, optionally, for any retinal cell simulation layer, at least one neuron model included in the layer serves as a first retina-simulating neuron model, and the remaining neuron models in the layer serve as second retina-simulating neuron models. The first retina-simulating neuron models of the retinal cell simulation layers together form a first retina simulation pathway; at least one of the remaining neuron models included in each retinal cell simulation layer together form a second retina simulation pathway.
Accordingly, step S220 may specifically include:
fusing, by the neuron models in the first retina simulation pathway, the input plurality of single-color image signals and at least one grayscale description signal to obtain a color reconstruction signal;
processing, by the neuron models in the second retina simulation pathway, the input plurality of grayscale description signals to obtain an optical-flow reconstruction signal.
Optionally, the first and second retina simulation pathways may use different neuron models in each retinal cell simulation layer, without reuse, to generate the final color reconstruction signal and optical-flow reconstruction signal; or they may share one or more neuron models in at least one retinal cell simulation layer. This embodiment places no restriction on this.
Optionally, all single-color image signals may be input only to the first retina simulation pathway and all grayscale description signals only to the second retina simulation pathway; or, while all single-color image signals are input to the first pathway, several of them may also be input to the second pathway for hybrid computation; or, while all grayscale description signals are input to the second pathway, several of them may simultaneously be input to the first pathway for hybrid computation.
By way of example and not limitation, the topology shown in FIG. 12 includes, along the input-to-output direction (i.e., the signal transmission direction), retinal cell simulation layers 1 through 4, each including a plurality of neuron models; for example, retinal cell simulation layer 1 includes neuron models 1 to 4. The input of the topology (the signals input to retinal cell simulation layer 1) consists of a plurality of single-color image signals and a plurality of grayscale description signals; its output (the signals output by retinal cell simulation layer 4) consists of a color reconstruction signal and an optical-flow reconstruction signal.
In this embodiment, the single-color image signals may specifically be the three-primary-color image signals: the R, G, and B color image signals.
The human retina mainly includes two kinds of photoreceptor cells: cones and rods. Cones are sensitive to absolute light-intensity information and color information and therefore achieve high image-reconstruction precision, but are slow. Rods, by contrast, cannot perceive color or absolute light intensity; they mainly perceive changes in light intensity, and are therefore very fast with a large dynamic range.
At present, the traditional methods disclosed in the related art model only single retinal cells or small cell populations, and lack a theoretical model for large-scale retinal modeling and simulation. The inventors found through research that a combination of artificial neural network units can realize high-quality, frequency-coded reconstruction of color image signals, while a combination of spiking neural network units can realize event-driven, time-coded reconstruction of high-speed optical-flow signals. Since the technical solution of the embodiments of the present disclosure has already constructed a general neuron model that can gate either an artificial neural network or a spiking neural network, this neuron model can be used as the minimal unit to build a retinal neuron topology capable of reconstructing color image signals and optical-flow signals simultaneously.
By building the above retinal neuron topology, a unified visual-perception paradigm heterogeneously fusing spiking and artificial neural network units is obtained, combining the advantages of both kinds of neural network unit to achieve better performance and efficiency in handling complex systems. The hybrid scheme is applicable to edge sensors, automotive applications, drones, robots, and other settings that simultaneously require high precision, low latency, and high energy efficiency.
In this embodiment, the retinal cell simulation layers simulate the real cell layers of the retina, e.g., the outer plexiform layer, inner nuclear layer, inner plexiform layer, and ganglion cell layer, while the plurality of neuron models in the different retinal cell simulation layers simulate the signal processing of neurons in the real cell layers. Optionally, retinal cell simulation layer 1 may simulate the outer plexiform layer, layer 2 the inner nuclear layer, layer 3 the inner plexiform layer, and layer 4 the ganglion cell layer.
On the basis of the above embodiments, the neural network includes a plurality of retinal cell simulation layers (the four layers shown in FIG. 12 are merely an example).
As described above, the neuron models in two retinal cell simulation layers are connected by at least one of forward direct connection, backward direct connection, forward cross-layer connection, and backward cross-layer connection, where "forward" is the signal transmission direction from input to output.
As shown in FIG. 12, neuron model 1 in retinal cell simulation layer 1 is forward directly connected to neuron model 5 in layer 2; neuron model 13 in layer 4 is backward directly connected to neuron model 10 in layer 3; neuron model 6 in layer 2 is forward cross-layer connected to neuron model 13 in layer 4; and neuron model 13 in layer 4 is backward cross-layer connected to neuron model 5 in layer 2. On the basis of layered processing, the cross-layer connection design achieves efficient feature extraction over multi-scale receptive fields, in particular mixed receptive fields.
On the basis of the above embodiments, the neural network units adapted to the neuron models in the retinal neuron topology may be trained by unsupervised learning.
On the basis of the above embodiments, the neural network units gated in the neuron models of the retinal neuron topology may be trained using at least one of the back-propagation algorithm, the winner-take-all algorithm, and the spike-timing-dependent plasticity algorithm.
On the basis of the above embodiments, the grayscale description signal may include a grayscale image signal or an optical-flow signal.
The optical-flow signal refers to a real-time light-intensity change, i.e., the change in light intensity of a pixel of a color image at a given moment; equivalently, a relative change in grayscale (luminance) value, characterizing the change between the pixel's current luminance and its historical luminance at an earlier moment.
Optionally, each neuron model in the first retina simulation pathway gates the first neural network unit, and each neuron model in the second retina simulation pathway gates the second neural network unit.
Further, the first neural network unit may be an artificial neural network (ANN) unit and the second neural network unit a spiking neural network (SNN) unit.
Optionally, the ANN units in the first retina simulation pathway may simulate the parasol-cell pathway of the human retina, realizing high-quality, frequency-coded reconstruction of color image signals; this pathway mainly simulates the retina's cones, cone horizontal cells, bipolar cells, amacrine cells, and parasol ganglion cells. The SNN units in the second retina simulation pathway may simulate the midget-cell pathway, realizing event-driven, time-coded high-speed optical-flow signals; this pathway mainly simulates the rods, rod horizontal cells, bipolar cells, amacrine cells, and midget ganglion cells.
On the basis of the above embodiments, the single-color image signal is a voltage signal, acquired by the color image sensing circuit of a dual-mode vision sensor, representing the light intensity of an optical signal; the grayscale description signal is a current signal, acquired by the light-intensity-change sensing circuit of the dual-mode vision sensor, representing the change in light intensity of an optical signal. Specifically, the signal input of the first retina simulation pathway receives both the voltage signal of light intensity acquired by the color image sensing circuit of the dual-mode vision sensor and the current signal of light-intensity change acquired by its light-intensity-change sensing circuit; the signal input of the second retina simulation pathway receives the current signal of light-intensity change acquired by the light-intensity-change sensing circuit of the dual-mode vision sensor.
In an optional implementation of the embodiments of the present disclosure, the dual-mode vision sensor may specifically include a first sensing circuit (also called a light-intensity-change sensing circuit) and a second sensing circuit (also called a color image sensing circuit);
the first sensing circuit being configured to extract, from a target optical signal, the optical signal of a first set waveband and output a current signal representing the change in light intensity of the optical signal of the first set waveband;
the second sensing circuit being configured to extract, from the target optical signal, the optical signal of a second set waveband and output a voltage signal representing the light intensity of the optical signal of the second set waveband.
Optionally, the first sensing circuit includes a first excitatory photosensitive unit and a first inhibitory photosensitive unit, both configured to extract the optical signal of the first set waveband from the target optical signal and convert it into current signals;
the first sensing circuit being further configured to output, according to the difference between the current signals converted by the first excitatory and first inhibitory photosensitive units, the current signal representing the change in light intensity of the optical signal of the first set waveband.
Optionally, the second sensing circuit includes at least one second photosensitive unit configured to extract the optical signal of the second set waveband from the target optical signal and convert it into a current signal;
the second sensing circuit being further configured to output, according to the current signal converted by the second photosensitive unit, the voltage signal representing the light intensity of the optical signal of the second set waveband.
With the above arrangement, the voltage-current dual-mode bionic vision sensor (dual-mode vision sensor) can, like the human retina, simultaneously acquire high-speed spatial-gradient signals (rods, ganglion cells, and horizontal cells) and low-speed color signals (cones). The retinal neuron topology of the embodiments of the present disclosure can then simulate the color reconstruction signal and optical-flow reconstruction signal that the human eye would reconstruct from the voltage signal of light intensity and the current signal of light-intensity change.
On the basis of the above embodiments, the grayscale description signals input to the first retina simulation pathway may be a subset of the grayscale description signals input to the second retina simulation pathway.
On the basis of the above embodiments, at least one of the simulation layers may be an input-signal simulation layer; likewise, the input-signal simulation layer includes at least one neuron model according to any embodiment of the present disclosure.
The output of the input-signal simulation layer is connected to the input of the first retina simulation pathway and to the input of the second retina simulation pathway.
Accordingly, the processing method may further include:
processing, by the input-signal simulation layer, an input simulated light signal to obtain a plurality of single-color image signals and a plurality of grayscale description signals;
sending, by the input-signal simulation layer, the plurality of single-color image signals and at least one grayscale description signal to the input of the first retina simulation pathway, and sending a plurality of the grayscale description signals to the input of the second retina simulation pathway.
In this embodiment, considering that the rod-and-cone layer of the human retina can generate a plurality of single-color image signals and a plurality of grayscale description signals from a light signal, the input signals required by the retinal neuron topology can be obtained not only with the dual-mode vision sensor, but also directly by having the input-signal simulation layer simulate the rod-and-cone layer.
It should again be emphasized that, however the plurality of single-color image signals and the plurality of grayscale description signals are obtained, the signals input to the first retina simulation pathway (which simulates the parasol-cell pathway) must include at least one grayscale description signal; the color reconstruction signal is finally obtained through the hybrid computation, within the first retina simulation pathway, of that grayscale description signal and the plurality of single-color image signals.
FIG. 12 shows a schematic diagram of a specific retinal neuron topology in Embodiment 6 of the present disclosure.
As shown in FIG. 12, the retinal neuron topology uses a layered network framework corresponding to a simplified multilayer retinal structure: the rod-and-cone layer, outer plexiform layer, inner nuclear layer, inner plexiform layer, and ganglion cell layer. For the input visual signal, the simulation network contains both a bottom-up feedforward process and a top-down feedback process.
For the feedforward process, heterogeneously fused ANN and SNN units may be used to simulate the various neurodynamic phenomena in the retina. ANN units may mainly simulate the parasol-cell pathway, realizing high-quality, frequency-coded reconstruction of color image signals; this pathway mainly simulates the retina's cones, cone horizontal cells, bipolar cells, amacrine cells, and parasol ganglion cells. SNN units may simulate the midget-cell pathway, realizing event-driven, time-coded high-speed optical-flow signals; this pathway mainly simulates the rods, rod horizontal cells, bipolar cells, amacrine cells, and midget ganglion cells. Meanwhile, to simulate the more than forty different receptive-field types of amacrine cells, cross-layer connections (forward and backward) are used on top of layered processing, achieving efficient feature extraction over multi-scale mixed receptive fields.
For the feedback process, the gap-junction network of horizontal cells in the inner nuclear layer may be simulated with a locally learned STDP (spike-timing-dependent plasticity) rule, and the attention mechanism of the human eye may be simulated with an unsupervised learning algorithm based on the WTA (winner-take-all) rule. The horizontal cells sum the received photoreceptor signal strengths to measure the average illumination over a region of the retina, and the feedback inhibition signal adjusts the photoreceptor output to a suitable level, so that the signal received by the bipolar cells is neither so small as to be drowned in the noise of the neural pathway nor so large as to saturate it, greatly improving the model's adaptability and dynamic range.
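The local STDP rule mentioned above can be sketched in its conventional pairwise form: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise, with an exponential dependence on the timing difference. The constants below are conventional illustrative values, not parameters given by the text.

```python
# Hedged sketch of pairwise STDP (spike-timing-dependent plasticity).
# A+, A-, and tau are conventional illustrative constants, not prescribed here.
import math

def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in the same unit, e.g. ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post fires before (or with) pre -> depression
        return -a_minus * math.exp(dt / tau)
```

The update decays with the timing gap, so closely correlated spike pairs change the weight more than distant ones, which is the local, unsupervised character exploited for the horizontal-cell feedback simulation.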
Meanwhile, as shown in FIG. 13, after the input is processed by the six neuron models of the rod-and-cone layer, the first three neuron models (in left-to-right order) output a plurality of single-color image signals to the parasol-cell pathway in the outer plexiform layer, and the last three neuron models output a plurality of grayscale description signals to the midget-cell pathway of the outer plexiform layer; the grayscale description signal output by the fourth neuron model is additionally provided to the parasol-cell pathway.
Embodiment 6
This embodiment provides a computer-readable storage medium storing a neuron model and an executable program, the neuron model including at least two independent neural network units, each configured to process signals of a different data format; when the executable program is invoked, the above method for processing signals using a neuron model provided by the present disclosure can be implemented.
Embodiment 7
This embodiment provides a computer-readable storage medium storing a neural network and an executable program,
the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each configured to process signals of a different data format;
when the executable program is invoked, the above method for processing signals using a neural network provided by the present disclosure can be implemented.
Embodiment 8
This embodiment provides an electronic device, including:
one or more processors;
a memory storing a neuron model and one or more executable programs, the neuron model including at least two independent neural network units, each configured to process signals of a different data format, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method for processing signals using a neuron model;
one or more I/O interfaces, connected between the processor(s) and the memory and configured to enable information exchange between the processor(s) and the memory.
Embodiment 9
This embodiment provides an electronic device, including:
one or more processors;
a memory storing a neural network and one or more executable programs, the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each configured to process signals of a different data format, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method for processing signals using a neural network;
one or more I/O interfaces, connected between the processor(s) and the memory and configured to enable information exchange between the processor(s) and the memory.
In the above embodiments of the present disclosure, the processor is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory is a device with data storage capability, including but not limited to random-access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O (read/write) interface, connected between the processor and the memory, enables information exchange between them and includes but is not limited to a data bus.
In some embodiments, the processor, the memory, and the I/O interface are interconnected by a bus and thereby connected to the other components of the computing device.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatuses, may be implemented as software, firmware, hardware, or an appropriate combination thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Note that the above are merely preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art will understand that the present disclosure is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the scope of protection of the present disclosure. Therefore, although the present disclosure has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include further equivalent embodiments without departing from the inventive concept of the present disclosure, the scope of which is determined by the scope of the appended claims.

Claims (25)

  1. A processing method for processing signals using a neuron model, wherein the neuron model includes at least two independent neural network units, each of the neural network units being configured to process signals of a different data format; the method comprising:
    receiving an input signal;
    determining, according to the input signal, a neural network unit capable of processing the input signal; and
    processing the input signal with the neural network unit capable of processing the input signal.
  2. The processing method according to claim 1, wherein determining, according to the input signal, the neural network unit capable of processing the input signal comprises:
    parsing routing information included in the input signal; and
    determining, as the neural network unit capable of processing the input signal, the neural network unit whose unit identifier matches the unit identifier carried in the routing information.
  3. The processing method according to claim 1 or 2, wherein the neural network units in the neuron model comprise a first neural network unit and a second neural network unit;
    the first neural network unit being configured to process and output analog signals; and
    the second neural network unit being configured to process and output spike signals.
  4. The processing method according to claim 3, wherein the first neural network unit is an artificial neural network unit and the second neural network unit is a spiking neural network unit.
  5. A processing method for processing signals using a neural network, wherein the neural network includes at least one simulation layer, the simulation layer includes a data format conversion device and at least one neuron model, the neuron model includes at least two independent neural network units, and each of the neural network units is configured to process signals of a different data format;
    the processing method comprising:
    generating, from initial information, input signals matching an encoding scheme corresponding to each neuron model; and
    processing the corresponding input signals with each neuron model using the processing method according to any one of claims 1 to 4, wherein generating, from the initial information, the input signals matching the encoding scheme corresponding to each neuron model comprises:
    when the encoding scheme of the initial information does not match the encoding scheme corresponding to the neuron model processing the initial information, converting the initial information, using the data format conversion device, into an input signal whose encoding type matches the encoding scheme corresponding to that neuron model; and
    when the encoding scheme of the initial information matches the encoding scheme corresponding to the neuron model processing the initial information, using the initial information as the input signal of that neuron model.
  6. The processing method according to claim 5, wherein the simulation layer includes a plurality of the neuron models networked in a preset manner, two directly connected neuron models being connected through the data format conversion device;
    the initial information of a later-stage neuron model being the information output by the preceding-stage neuron model.
  7. The processing method according to claim 6, wherein the first neural network unit in the neuron model is an artificial neural network unit and the second neural network unit in the neuron model is a spiking neural network unit, and
    in the case where the second neural network unit is gated in the preceding-stage neuron model and the first neural network unit is gated in the later-stage neuron model, converting the initial information, using the data format conversion device, into the input signal whose encoding type matches the encoding scheme corresponding to the neuron model processing the initial information comprises:
    accumulating, with the data format conversion device, the spike signals output by the preceding-stage neuron model at different times over a set duration to obtain an analog signal, and using the analog signal as the input signal of the later-stage neuron model.
  8. The processing method according to claim 7, wherein the input of the data format conversion device is connected to the output of the preceding-stage neuron model, and the output of the data format conversion device is connected to the input of the later-stage neuron model; or
    the input of the data format conversion device is connected to the output ports of the last hidden layer of the preceding-stage neuron model, and the output of the data format conversion device is connected to the input of the later-stage neuron model.
  9. The processing method according to claim 7, wherein, in the case where the first neural network unit is gated in the preceding-stage neuron model and the second neural network unit is gated in the later-stage neuron model, converting the initial information, using the data format conversion device, into the input signal whose encoding type matches the encoding scheme corresponding to the neuron model processing the initial information comprises:
    sampling, according to a set sampling rule, the analog signal output by the preceding-stage neuron model, and using the sampled spike signals as the input signal of the later-stage neuron model.
  10. The processing method according to any one of claims 5 to 9, wherein at least one of the simulation layers of the neural network is a retinal cell simulation layer, and the signals input to the retinal cell simulation layer include a plurality of single-color image signals and a plurality of grayscale description signals, so that a color reconstruction signal and an optical-flow reconstruction signal are obtained through the retinal cell simulation layer.
  11. The processing method according to claim 10, wherein the neural network includes a plurality of the retinal cell simulation layers,
    the neuron models in two directly connected retinal cell simulation layers being connected by at least one of forward direct connection, backward direct connection, forward cross-layer connection, and backward cross-layer connection, wherein forward is the signal transmission direction from input to output.
  12. The processing method according to claim 11, wherein, for any retinal cell simulation layer, at least one neuron model included in the retinal cell simulation layer serves as a first retina-simulating neuron model and the remaining neuron models in the retinal cell simulation layer serve as second retina-simulating neuron models,
    the first retina-simulating neuron models of the retinal cell simulation layers together forming a first retina simulation pathway, and at least one of the remaining neuron models included in each retinal cell simulation layer together forming a second retina simulation pathway;
    processing the corresponding input signals with each neuron model using the processing method according to any one of claims 1 to 6 comprising:
    fusing, by the neuron models in the first retina simulation pathway, the input plurality of single-color image signals and at least one grayscale description signal to obtain the color reconstruction signal; and
    processing, by the neuron models in the second retina simulation pathway, the input plurality of grayscale description signals to obtain the optical-flow reconstruction signal.
  13. The processing method according to claim 12, wherein each neuron model in the first retina simulation pathway gates the first neural network unit, and each neuron model in the second retina simulation pathway gates the second neural network unit.
  14. The processing method according to claim 12, wherein the grayscale description signal comprises a grayscale image signal or an optical-flow signal; and/or
    the grayscale description signals input to the first retina simulation pathway are a subset of the grayscale description signals input to the second retina simulation pathway.
  15. The processing method according to claim 11, wherein the neural network units adapted to the neuron models in the retinal cell simulation layers are trained by unsupervised learning; and/or
    the neural network units gated in the neuron models of the retinal cell simulation layers are trained using at least one of a back-propagation algorithm, a winner-take-all algorithm, and a spike-timing-dependent plasticity algorithm.
  16. The processing method according to claim 12, wherein:
    the single-color image signal is a voltage signal, acquired by a color image sensing circuit of a dual-mode vision sensor, representing the light intensity of an optical signal, and the grayscale description signal is a current signal, acquired by a light-intensity-change sensing circuit of the dual-mode vision sensor, representing the change in light intensity of an optical signal.
  17. The processing method according to claim 12, wherein at least one of the simulation layers is an input-signal simulation layer;
    the output of the input-signal simulation layer being connected to the input of the first retina simulation pathway and to the input of the second retina simulation pathway;
    the processing method further comprising:
    processing, by the input-signal simulation layer, an input simulated light signal to obtain a plurality of single-color image signals and a plurality of grayscale description signals; and
    sending, by the input-signal simulation layer, the plurality of single-color image signals and at least one grayscale description signal to the input of the first retina simulation pathway, and sending a plurality of the grayscale description signals to the input of the second retina simulation pathway.
  18. The processing method according to any one of claims 5 to 9, further comprising:
    receiving routing information, the routing information including the neuron-model identifier of each neuron model on the path represented by the routing information, and the identifier of the neural network unit gated in each such neuron model;
    before processing the corresponding input signals with a neuron model using the processing method according to any one of claims 1 to 6, the processing method further comprising:
    parsing the routing information to determine the next-hop neuron model for processing the input information.
  19. A signal processing apparatus for processing signals using a neuron model, the signal processing apparatus comprising:
    a neuron model storage module storing a neuron model, the neuron model including at least two independent neural network units, each of the neural network units being configured to process signals of a different data format;
    an input signal receiving module configured to receive an input signal;
    a target determination module configured to determine, according to the input signal, a neural network unit capable of processing the input signal; and
    a gating device connected to each of the neural network units and configured to gate the connection between the input signal receiving module and the neural network unit determined by the target determination module as capable of processing the input signal, so that the input signal is processed by that neural network unit.
  20. The processing apparatus according to claim 19, wherein the gating device is a program-controlled device, the processing apparatus further comprising a gating control device connected to the gating device;
    the gating control device being configured to send, to the gating device, a gating control instruction matching the neural network unit capable of processing the input signal; and
    the gating device being further configured to gate, according to the gating control instruction, the neural network unit in the neuron model capable of processing the input signal.
  21. The processing apparatus according to claim 20, wherein the gating control device is connected to the input port of the neuron model;
    the gating control device being further configured to parse routing information included in the input signal; and
    the target determination module being configured to determine, according to the routing information, the neural network unit to be gated.
  22. A computer-readable storage medium storing a neuron model and an executable program, the neuron model including at least two independent neural network units, each of the neural network units being configured to process signals of a different data format, wherein, when the executable program is invoked, the processing method according to any one of claims 1 to 4 can be implemented.
  23. A computer-readable storage medium storing a neural network and an executable program,
    the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each of the neural network units being configured to process signals of a different data format;
    wherein, when the executable program is invoked, the processing method according to any one of claims 5 to 18 can be implemented.
  24. An electronic device, comprising:
    one or more processors;
    a memory storing a neuron model and one or more executable programs, the neuron model including at least two independent neural network units, each of the neural network units being configured to process signals of a different data format, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the processing method according to any one of claims 1 to 4; and
    one or more I/O interfaces, connected between the processor(s) and the memory and configured to enable information exchange between the processor(s) and the memory.
  25. An electronic device, comprising:
    one or more processors;
    a memory storing a neural network and one or more executable programs, the neural network including at least one simulation layer, the simulation layer including a data format conversion device and at least one neuron model, the neuron model including at least two independent neural network units, each of the neural network units being configured to process signals of a different data format, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the processing method according to any one of claims 5 to 18; and
    one or more I/O interfaces, connected between the processor(s) and the memory and configured to enable information exchange between the processor(s) and the memory.
PCT/CN2021/123314 2020-10-13 2021-10-12 Processing method, medium and device for processing signals using neuron model and network WO2022078334A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011090143.8A CN112257846A (zh) 2020-10-13 2020-10-13 Neuron model, topology, information processing method and retinal neuron
CN202011090143.8 2020-10-13

Publications (1)

Publication Number Publication Date
WO2022078334A1 true WO2022078334A1 (zh) 2022-04-21

Family

ID=74243139

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123314 WO2022078334A1 (zh) 2020-10-13 2021-10-12 利用神经元模型及网络处理信号的处理方法、介质、设备

Country Status (2)

Country Link
CN (1) CN112257846A (zh)
WO (1) WO2022078334A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257846A (zh) 2020-10-13 2021-01-22 北京灵汐科技有限公司 Neuron model, topology, information processing method and retinal neuron
CN113408714B (zh) * 2021-05-14 2023-04-07 杭州电子科技大学 All-digital spiking neural network hardware system and method based on the STDP rule
CN115150439B (zh) * 2022-09-02 2023-01-24 北京电科智芯科技有限公司 Sensing-data parsing method and system, storage medium, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095966A (zh) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and spiking neural network
CN105095965A (zh) * 2015-07-16 2015-11-25 清华大学 Hybrid communication method for artificial neural networks and spiking neural networks
CN105719000A (zh) * 2016-01-21 2016-06-29 广西师范大学 Neuron hardware structure and method of simulating a spiking neural network with the structure
CN109816026A (zh) * 2019-01-29 2019-05-28 清华大学 Fusion structure and method of convolutional neural network and spiking neural network
US20190370653A1 (en) * 2016-11-22 2019-12-05 Washington University Large-scale networks of growth transform neurons
CN112257846A (zh) * 2020-10-13 2021-01-22 北京灵汐科技有限公司 Neuron model, topology, information processing method and retinal neuron


Also Published As

Publication number Publication date
CN112257846A (zh) 2021-01-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879378

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21879378

Country of ref document: EP

Kind code of ref document: A1