CN117709413A - Port model object calling method, system, platform, intelligent device and storage medium - Google Patents
- Publication number
- CN117709413A CN117709413A CN202211068021.8A CN202211068021A CN117709413A CN 117709413 A CN117709413 A CN 117709413A CN 202211068021 A CN202211068021 A CN 202211068021A CN 117709413 A CN117709413 A CN 117709413A
- Authority
- CN
- China
- Prior art keywords
- model object
- port
- synapse
- neuron
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/08—Learning methods
Abstract
The invention discloses a port model object calling method, system, platform, intelligent device and storage medium. The method comprises the following steps: invoking at least two preset model objects prestored in a system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operations, and the port is used for carrying out information interaction or association with the outside of the port model object; and setting the at least two preset model objects to perform association or information interaction through at least one port of each preset model object. The invention can effectively improve the construction efficiency of the neural network and reduce the construction cost.
Description
Technical Field
The invention relates to the technical field of neural networks, and in particular to a port model object calling method, system, platform, intelligent device, and storage medium.
Background
When building a neural network model object, a builder typically works with the aid of a neural network construction system to reduce the cost and increase the efficiency of building the model object. Existing neural network construction systems can provide a multilingual construction environment, so that a constructor can build a neural network model object in whichever environment is most familiar.
Because brain-like neural network models take the biological brain as their reference, and the connection patterns of the brain's neural circuits, neurons, and synapses are highly varied and complex, users face high modeling costs: they must handle a large amount of hand-written model code, which is error-prone and inefficient, and this increases the cost of constructing a neural network model object.
Disclosure of Invention
In view of the above, it is necessary to provide a port model object calling method, a system, a platform, an intelligent device, and a storage medium.
A method of port model object invocation, comprising:
invoking at least two preset model objects prestored in a system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operation, and the port is used for carrying out information interaction or association with the outside of the port model object;
and setting at least two preset model objects to perform association or information interaction through at least one port of each preset model object.
A port model object invocation system, comprising:
the model object module is used for calling at least one preset model object prestored in the system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operation, and the port is used for carrying out information interaction or association with the outside of the port model object;
the setting module is used for setting at least two preset model objects to associate or exchange information with each other through their ports.
A neural network build platform, comprising: a front end portion, a core portion, and a rear end portion;
the front-end portion is used for acting as a proxy through which the outside of the platform interacts with the core portion and/or the back-end portion;
the back end part is used for running a target model;
the core part is used for responding to the request of the front end part and/or the back end part and describing and/or organizing the target model.
A smart device comprising a memory having a computer program stored therein and a processor for executing the computer program to perform the method as described above.
A storage medium storing a computer program capable of being loaded by a processor and executing a method as described above.
The embodiment of the invention has the following beneficial effects:
calling at least one preset model object pre-stored in the system, and combining the preset model objects according to the topology corresponding to the network topology instruction to obtain a combined model object; at least one proxy port is then added to the combined model object to generate a neural network topology. A user can directly use ready-made preset model objects without writing model objects, which removes the need to repeatedly write large amounts of models/code when constructing a neural network, effectively improving construction efficiency and reducing construction cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will briefly introduce the drawings required for the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flowchart of a first embodiment of a method for invoking port model objects provided by the present invention;
FIG. 2 is a schematic structural view of a first embodiment of a preset model object provided by the present invention;
FIG. 3 is a schematic structural view of a second embodiment of a preset model object provided by the present invention;
FIG. 4 is a schematic diagram of a feed-forward structure according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a cross-layer structure according to an embodiment of the present invention;
FIG. 6 is a schematic view of one embodiment of a lateral structure provided by the present invention;
FIG. 7 is a schematic diagram of an embodiment of a feedback structure provided by the present invention;
FIG. 8 is a schematic diagram of a self-circulation structure according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an embodiment of a neuron-neuron direct structure according to the present invention;
FIG. 10 is a schematic diagram of an embodiment of a synapse-synapse direct structure provided by the present invention;
FIG. 11 is a schematic diagram of an embodiment of a synapse-dendrite direct structure provided by the present invention;
fig. 12 is a schematic structural diagram of a first embodiment of a neural network topology provided by the present invention;
fig. 13 is a schematic structural diagram of a second embodiment of a neural network topology provided by the present invention;
fig. 14 is a schematic structural diagram of a third embodiment of a neural network topology provided by the present invention;
FIG. 15 is a schematic diagram illustrating the structure of an embodiment of a port model object invocation system provided by the present invention;
FIG. 16 is a schematic diagram illustrating the structure of an embodiment of a neural network build platform according to the present invention;
fig. 17 is an internal structural diagram of the intelligent device provided by the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 in combination, fig. 1 is a flowchart illustrating a first embodiment of a method for calling a port model object according to the present invention.
S101: calling at least two preset model objects pre-stored in the system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operations, and the port is used for carrying out information interaction or association with the outside of the port model object.
In a specific implementation scenario, a plurality of preset model objects are prestored in the neural network building platform 100, and at least two preset model objects may be selected and invoked by a user, or invoked automatically by the system. A preset model object can also be a custom model object designed by the user, so that the user can conveniently call it when constructing a neural network later; this removes the need to repeatedly write large amounts of models/code and can effectively improve the user's construction efficiency. Each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operations, and the port is used for carrying out information interaction or association with the outside of the port model object.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a first embodiment of a preset model object according to the present invention. The preset model object comprises at least one port and is associated with data and/or operations. An operation represents an actual dynamic process: after receiving input, it produces output and passes it downstream. The data express the characteristics of the preset model object, which can take concrete forms such as variable values, character strings, and tensors. In one implementation scenario, the ports include at least one of an input port input1, an output port output1, a reference port reference, and a connection port matrix.
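The port/model-body separation described above can be sketched in code. The following Python is a minimal illustration under assumed names (Port, PortModelObject, connect): the patent does not specify an API, so every identifier here is hypothetical.

```python
# Minimal sketch of a port model object; all class and port names are
# illustrative assumptions, not the patent's actual API.
class Port:
    """A named attachment point: kind is "input", "output", "reference",
    or "connection" (the matrix port)."""
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.peer = None  # the port this one is wired to, if any

class PortModelObject:
    """A model main body (data and/or operations) reachable only through ports."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = {p.name: p for p in ports}
        self.data = {}  # characteristics: variable values, strings, tensors, ...

    def connect(self, port_name, other, other_port_name):
        """Associate one of this object's ports with a port of another object."""
        a, b = self.ports[port_name], other.ports[other_port_name]
        a.peer, b.peer = b, a

# Example: wire a neuron's output to a synapse's input.
neuron = PortModelObject("neuron_a", [Port("output1", "output")])
synapse = PortModelObject("synapse_a", [Port("input1", "input")])
neuron.connect("output1", synapse, "input1")
```

The model main body never exposes its internals directly; everything outside reaches it through the port table, which is what gives the composed topology its uniformity.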
Referring to fig. 3, fig. 3 is a schematic structural diagram of a second embodiment of a preset model object provided by the present invention. A preset model object may also be a structure comprising at least one basic model object as shown in fig. 2, or other modules, where the basic model objects can associate or exchange information through at least one of their respective ports. The input ports comprise a proxy input port and the output ports comprise a proxy output port. The proxy input port is connected to the input port of at least one basic model object, and the proxy output port is connected to the output port of at least one basic model object. The specific port connections may be preset by the user or generated directly from the relationships between the basic model objects, which is not limited herein.
In one implementation scenario, the preset model objects include at least one of neuron model objects, synapse model objects, dendrite model objects, plasticity model objects, and neuromodulation model objects. For example, a neuron model object may comprise LIF, Hodgkin-Huxley, ReLU, and other models, and may describe its dynamics with partial differential equations, in Euler fashion, or with other functions. As another example, a synapse model object may comprise a linear synapse model (with only weighting and spike-transmission functions), a delayed synapse model (with weighting, spike transmission, and delayed delivery), and so on. For another example, a dendrite model object may include a shunting inhibition model. As another example, plasticity model objects may include Hebbian plasticity, anti-Hebbian plasticity, spike-timing-dependent plasticity (STDP), and similar models. For another example, a neuromodulation model object may include models of the modulation mechanisms of glutamate, acetylcholine (ACh), serotonin (5-HT), norepinephrine (NA), dopamine (DA), neurotrophic factors, and the like.
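As a concrete instance of a neuron model body described in Euler fashion, a LIF update can be written as a single step function; this is a sketch, and the parameter values (time constant, threshold, reset) are illustrative defaults, not values from the patent.

```python
def lif_step(v, i_in, dt=1.0, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """One Euler step of the leaky integrate-and-fire dynamics
    dv/dt = (v_rest - v + i_in) / tau; returns (new_v, spiked).
    All parameter values here are illustrative assumptions."""
    v = v + dt * (v_rest - v + i_in) / tau
    if v >= v_th:
        return v_reset, True   # fire and reset
    return v, False

# A strong input drives the membrane over threshold in one step:
v, spiked = lif_step(v=0.0, i_in=20.0)  # v == 0.0 (reset), spiked == True
```

Such a function would live inside the model main body, with the port layer delivering `i_in` and carrying the spike downstream.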
S102: and setting at least two preset model objects to perform association or information interaction through respective ports.
In a specific implementation scenario, at least two preset model objects are set to associate or exchange information through their respective ports. For example, the output port of preset model object a is connected to the input port of preset model object b, so as to transmit the output information of preset model object a to preset model object b. For another example, the reference port of preset model object c is connected to the reference port of preset model object d, so as to associate preset model object c with preset model object d.
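When an output port feeds a synapse model object's input port in this way, the synapse body determines what arrives downstream. Here is a hedged sketch of the delayed synapse model mentioned earlier (weighting plus delayed delivery); the class name and buffer representation are assumptions, not the patent's implementation.

```python
from collections import deque

class DelayedSynapse:
    """Sketch of a delayed synapse body: weights incoming spikes and
    delivers them delay_steps steps later (delay_steps must be >= 1).
    Illustrative names and representation."""
    def __init__(self, weight, delay_steps=1):
        self.weight = weight
        # Fixed-length ring buffer preloaded with zeros (nothing in flight yet).
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)

    def step(self, spike_in):
        """Read the value due now, then enqueue this step's weighted input."""
        out = self.buffer[0]
        self.buffer.append(spike_in * self.weight)  # oldest entry drops off
        return out

syn = DelayedSynapse(weight=0.5, delay_steps=2)
outputs = [syn.step(x) for x in (1.0, 0.0, 0.0)]  # -> [0.0, 0.0, 0.5]
```

A linear synapse is the degenerate case with `delay_steps=1`: each weighted input emerges on the following step.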
In one implementation scenario, a neural network topology may be formed, based on a network topology instruction, by having at least two preset model objects associate or exchange information through their ports. The network topology instruction may be entered by the user or generated automatically by the system, and may describe a topology defined by the user or invoke a topology pre-stored in the neural network build platform 100. For example, at least one preset model object and/or at least one user-defined custom model object is placed at the node positions of a preset topology designated by the user, and connections are made according to the objects' positions in the topology and the connection or affiliation relationships between their ports, thereby forming a neural network topology. Ports increase the descriptive and extension capabilities of the topology: modeling is unified through ports, and support for extending port modes ensures that the topology can be extended as needed to meet continually changing modeling requirements, effectively improving the user's construction efficiency and reducing construction cost.
In one implementation, the topology includes at least one of a feed-forward structure, a feedback structure, a cross-layer structure, a lateral structure, a self-loop structure, a neuron-neuron direct structure, a synapse-synapse direct structure, a synapse-dendrite direct structure, and a loop structure. Referring to figs. 4-11: fig. 4 is a schematic structural diagram of an embodiment of a feed-forward structure according to the present invention; fig. 5, of a cross-layer structure; fig. 6, of a lateral structure; fig. 7, of a feedback structure; fig. 8, of a self-loop structure; fig. 9, of a neuron-neuron direct structure; fig. 10, of a synapse-synapse direct structure; and fig. 11, of a synapse-dendrite direct structure. In figs. 4-11, neuron model objects are represented by circles, neuron group model objects by boxes, and synapse model objects by small circles attached to straight lines.
In one implementation scenario, a neural network topology is obtained, and neuron model objects, neuron group model objects, and synapse model objects are organized according to it. For example, if the neural network topology is a synapse-to-synapse direct connection, the user invokes the neuron model objects a, b, and c and the synapse model objects a and c via model object invocation instructions, arranges them according to the structure shown in fig. 10, and connects the ports of the respective neuron and synapse model objects, thereby forming a combined model object with the topology shown in fig. 10.
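The organization step above can be pictured as a plain connection table. The "object.port" string encoding and the exact wiring chosen here for a synapse-to-synapse direct structure are assumptions for illustration; only the object labels follow the example.

```python
# Hypothetical connection table for a synapse-to-synapse direct structure;
# the encoding and wiring are illustrative assumptions.
topology = [
    ("neuron_a.output", "synapse_a.input"),
    ("synapse_a.output", "neuron_b.input"),
    ("neuron_c.output", "synapse_c.input"),
    ("synapse_c.output", "synapse_a.input"),  # a synapse feeding a synapse directly
]

def fan_in(port, topo):
    """Every source port wired into the given destination port."""
    return [src for src, dst in topo if dst == port]

sources = fan_in("synapse_a.input", topology)
# synapse_a receives from both a neuron and another synapse.
```

A table like this is one way a network topology instruction could be recorded before the port connections are actually established.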
In other implementation scenarios, at least one proxy port is added after the neural network topology is formed, the proxy port includes a proxy input port or a proxy output port, the proxy input port is connected to an input port of at least one preset model object in the neural network topology, and the proxy output port is connected to an output port of at least one preset model object in the neural network topology.
In one implementation scenario, the loop structure includes a neuron-synapse-neuron-synaptic-plasticity loop structure. Referring to fig. 12, fig. 12 is a schematic structural diagram of a first embodiment of the neural network topology provided by the present invention. The neural network topology is a neuron-synapse-neuron-synaptic-plasticity loop structure. The neuron group model object a, the neuron group model object b, the synapse group model object a, and the synapse plasticity model object a are invoked.
The output port of the neuron group model object a is connected with the input port input of the synapse group model object a and the input port pre of the synapse plasticity model object a respectively, the connection port matrix of the synapse group model object a and the synapse plasticity model object a is connected, the output port of the synapse group model object a is connected with the input port input of the neuron group model object b, and the other input port next of the synapse plasticity model object a is connected with the output port output of the neuron group model object b, so that the neural network topology shown in fig. 12 is formed.
A proxy input port input and a proxy output port output are added to the neural network topology, wherein the proxy input port input is connected with the input port of the neuron group model object a, and the proxy output port output is connected with the output port of the neuron group model object b, thereby generating the neural network topology.
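The wiring just described for fig. 12 can be written out as a connection list plus a proxy-port map. The encoding is an assumption; the object and port names follow the text.

```python
# Connection list for the neuron-synapse-neuron-synaptic-plasticity loop of
# fig. 12; the tuple/dict encoding is an illustrative assumption.
loop_connections = [
    ("neuron_group_a.output", "synapse_group_a.input"),
    ("neuron_group_a.output", "plasticity_a.pre"),
    ("synapse_group_a.matrix", "plasticity_a.matrix"),  # binds plasticity to its synapse group
    ("synapse_group_a.output", "neuron_group_b.input"),
    ("neuron_group_b.output", "plasticity_a.next"),     # closes the loop
]

# Proxy ports expose the whole loop as one combined model object:
proxy_ports = {
    "input":  "neuron_group_a.input",
    "output": "neuron_group_b.output",
}
```

Note that the plasticity object touches the synapse group only through the matrix binding, which is the structural decoupling the text emphasizes.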
In the present application, synaptic plasticity and synapses are modeled independently: the synaptic plasticity model object is bound to a specific synapse group model object through the connection port matrix, so the two are largely decoupled structurally while remaining associated through the port mechanism, which fully demonstrates the flexibility of the port mechanism and the extensibility of the topology.
In one implementation, the loop structure includes a neuron-synapse-dendrite-neuron-synaptic-plasticity loop structure. Referring to fig. 13, fig. 13 is a schematic structural diagram of a second embodiment of the neural network topology of the present invention. The neural network topology is a neuron-synapse-dendrite-neuron-synaptic-plasticity loop structure. The neuron group model object c, the neuron group model object d, the synapse group model object b, the synapse plasticity model object b, the dendrite group model object a, and a direct conversion model object are invoked.
The output port of the neuron group model object c is connected with the input port input of the synapse group model object b and the input port pre of the synapse plasticity model object b; the connection port matrix of the synapse group model object b is connected with the connection port matrix of the synapse plasticity model object b; the output port output of the synapse group model object b is connected with the input port input of the dendrite group model object a; the output port output of the dendrite group model object a is connected with the input port input of the neuron group model object d; the output port pre of the direct conversion model object is connected with the other input port next of the synapse plasticity model object b; and the input port next of the direct conversion model object is connected with the output port output of the neuron group model object d, thereby forming the neural network topology shown in fig. 13.
A proxy input port input and a proxy output port output are added to the neural network topology, wherein the proxy input port input is connected with the input port input of the neuron group model object c, and the proxy output port output is connected with the output port output of the neuron group model object d, thereby generating the neural network topology.
In this application, information backflow must be ensured: a synaptic plasticity model object may depend on the states of both the upstream and the downstream neurons, but introducing the dendrite model object breaks the direct topology. To preserve model uniformity, support for ConnectionTransition is added after the dendrite model object is constructed, providing a correct one-to-one association mapping. ConnectionTransition is a general form of connection conversion: it extends the basic model objects, binds to a specific connection model through the connection port matrix, and is decoupled from that model to a certain extent. Its main functions are to provide information backflow and to guarantee model consistency.
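A hedged sketch of what a ConnectionTransition could compute: routing downstream (post-synaptic) state back to individual connections via the bound connection matrix, so the plasticity rule sees a one-to-one mapping. The function signature and data layout are assumptions, not the patent's implementation.

```python
# Illustrative ConnectionTransition: given the "matrix" binding expressed as
# (pre_index, post_index) pairs, route each downstream neuron's state back to
# the individual connections that end on it.
def connection_transition(post_state, matrix):
    """post_state: one value per downstream neuron; matrix: (pre, post)
    index pairs of the bound connection model. Returns one value per
    connection, in matrix order."""
    return [post_state[post] for _pre, post in matrix]

# Two connections ending on neuron 0, one on neuron 1:
matrix = [(0, 0), (1, 0), (1, 1)]
per_connection = connection_transition([0.7, 0.2], matrix)  # -> [0.7, 0.7, 0.2]
```

The point of the one-to-one mapping is that the plasticity rule never needs to know whether a dendrite stage sits between synapse and neuron; it always receives exactly one downstream value per connection.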
In one implementation, the loop structure includes a neuron-synapse-neuron-synaptic-plasticity multi-path merging loop structure. Referring to fig. 14, fig. 14 is a schematic structural diagram of a third embodiment of the neural network topology of the present invention. The neural network topology is a neuron-synapse-neuron-synaptic-plasticity multi-path merging loop structure. The neuron group model object e, the neuron group model object f, the neuron group model object g, the synapse group model object c, the synapse group model object d, the synapse plasticity model object c, and the synapse plasticity model object d are invoked.
The output port output of the neuron group model object e is connected with the input port input of the synapse group model object c and the input port pre of the synapse plasticity model object c; the connection port matrix of the synapse group model object c is connected with the connection port matrix of the synapse plasticity model object c; the output port output of the neuron group model object f is connected with the input port input of the synapse group model object d and the input port pre of the synapse plasticity model object d; the connection port matrix of the synapse group model object d is connected with the connection port matrix of the synapse plasticity model object d; the output port output of the synapse group model object c is connected with the input port input0 of the simplified model object; the output port output of the synapse group model object d is connected with the input port input1 of the simplified model object; the output port output of the simplified model object is connected with the input port input of the neuron group model object g; and the other input ports next of the synapse plasticity model objects c and d are each connected with the output port output of the neuron group model object g, thereby forming the neural network topology shown in fig. 14.
Proxy input ports input0 and input1 and a proxy output port output are added to the combined model object, wherein the proxy input port input0 is connected with the input port of the neuron group model object e, the proxy input port input1 is connected with the input port of the neuron group model object f, and the proxy output port output is connected with the output port of the neuron group model object g, thereby generating the neural network topology.
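The fig. 14 wiring, including the merge through the simplified model object, can likewise be written as a connection list. The encoding and the name "merge" for the simplified model object are assumptions; object labels follow the text.

```python
# Connection list for the fig. 14 multi-path merging loop; the tuple encoding
# and the "merge" name are illustrative assumptions.
merge_connections = [
    ("neuron_group_e.output", "synapse_group_c.input"),
    ("neuron_group_e.output", "plasticity_c.pre"),
    ("synapse_group_c.matrix", "plasticity_c.matrix"),
    ("neuron_group_f.output", "synapse_group_d.input"),
    ("neuron_group_f.output", "plasticity_d.pre"),
    ("synapse_group_d.matrix", "plasticity_d.matrix"),
    ("synapse_group_c.output", "merge.input0"),   # the two paths converge here
    ("synapse_group_d.output", "merge.input1"),
    ("merge.output", "neuron_group_g.input"),
    ("neuron_group_g.output", "plasticity_c.next"),  # shared downstream state
    ("neuron_group_g.output", "plasticity_d.next"),
]
proxy_ports = {"input0": "neuron_group_e.input",
               "input1": "neuron_group_f.input",
               "output": "neuron_group_g.output"}
```

Both plasticity objects read the same downstream neuron group g, which is what makes this a merging loop rather than two independent loops.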
As can be seen from the above description, at least one preset model object pre-stored in the system is called, and the preset model objects are combined according to the topology corresponding to the network topology instruction to obtain a combined model object; at least one proxy port is added to the combined model object to generate a neural network topology. A user can directly use ready-made preset model objects without writing model objects, which removes the need to repeatedly write large amounts of models/code when constructing a neural network, effectively improving construction efficiency and reducing construction cost.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an embodiment of a port model object calling system according to the present invention. The port model object calling system 10 includes a model object module 11 and a setting module 12.
The model object module 11 invokes at least one preset model object pre-stored in the system, where each preset model object is a port model object, and the port model object includes at least one port and a model body, where the model body is associated with data and/or operations, and the port is used for information interaction or association with the outside of the port model object. The setting module 12 is configured to set at least two preset model objects to associate or interact information through respective ports.
Wherein the ports include at least one of an input port, an output port, a reference port, and a connection port.
The preset model object comprises at least one of a neuron model object, a synapse model object, a dendritic model object, a plasticity model object and a nerve modulation model object.
The setting module 12 is further configured to construct a neural network topology structure by associating or interacting information of at least two preset model objects through the ports.
The neural network topology structure comprises at least one of a feed-forward structure, a feedback structure, a circulation structure, a cross-layer structure, a lateral structure, a self-circulation structure, a neuron-neuron direct connection structure, a synapse-synapse direct connection structure, a synapse-dendritic direct connection structure and a loop structure.
The loop structure includes a neuron-synapse-neuron-synaptic plasticity loop structure. When the neural network topology is a neuron-synapse-neuron-synaptic plasticity loop structure, the setting module 12 is further configured to connect the output port of the first neuron group model object with the input port of the first synaptic group model object and an input port of the first synaptic plasticity model object, connect the connection port of the first synaptic group model object with the connection port of the first synaptic plasticity model object, connect the output port of the first synaptic group model object with the input port of the second neuron group model object, and connect the other input port of the first synaptic plasticity model object with the output port of the first neuron group model object.
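The loop wiring above can be written down as a plain edge list. All object and port names are illustrative assumptions; in particular, the closing feedback edge is wired here from the second (post-synaptic) neuron group, which is one natural reading of a "loop" structure rather than a statement of the patent's exact wiring.

```python
# Edge-list sketch of the neuron-synapse-neuron-synaptic plasticity loop.
# Names ("neurons1", "in_pre", ...) are assumptions for illustration.

edges = []

def link(src, src_port, dst, dst_port):
    edges.append(((src, src_port), (dst, dst_port)))

link("neurons1", "out", "synapses1", "in")        # drive the synapse group
link("neurons1", "out", "plasticity1", "in_pre")  # pre-synaptic activity
link("synapses1", "conn", "plasticity1", "conn")  # shared weight state
link("synapses1", "out", "neurons2", "in")        # post-synaptic population
link("neurons2", "out", "plasticity1", "in_post") # feedback closes the loop

# Every model object named by the loop appears in the wiring.
objects = {o for edge in edges for (o, _) in edge}
```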
The loop structure also includes a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure. When the neural network topology is a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure, the setting module 12 is further configured to connect the output port of the third neuron group model object with the input port of the second synaptic group model object and an input port of the second synaptic plasticity model object, connect the connection port of the second synaptic group model object with the connection port of the second synaptic plasticity model object, connect the connection port of the first synaptic group model object with the connection port of the direct conversion model object, connect the output port of the second synaptic group model object with the input port of the first synaptic group model object, connect the output port of the direct conversion model object with the other input port of the second synaptic plasticity model object, connect the output port of the first synaptic group model object with the input port of the fourth synaptic plasticity model object, and connect the input port of the direct conversion model object with the output port of the first synaptic plasticity model object.
The loop structure also includes a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure. When the neural network topology is a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure, the setting module 12 is further configured to connect the output port of the fifth neuron group model object with the input port of the third synaptic group model object and an input port of the third synaptic plasticity model object, connect the connection port of the third synaptic group model object with the connection port of the third synaptic plasticity model object, connect the output port of the sixth neuron group model object with the input port of the fourth synaptic group model object and an input port of the fourth synaptic plasticity model object, connect the connection port of the fourth synaptic group model object with the connection port of the fourth synaptic plasticity model object, connect the output port of the third synaptic group model object with the first input port of the simplified model object, connect the output port of the fourth synaptic group model object with the second input port of the simplified model object, connect the output port of the simplified model object with the input port of the seventh neuron group model object, connect the other input port of the third synaptic plasticity model object with the output port of the seventh neuron group model object, and connect the other output port of the fourth synaptic plasticity model object with the input port of the seventh neuron group model object.
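One plausible reading of the "simplified model object" in the multi-path merging loop is a reduction stage: it merges several synaptic output streams into a single drive for the downstream neuron group. The function name `simplified_merge`, the `mode` parameter, and the sample values are all assumptions for illustration.

```python
# Sketch of a merge/reduction stage for the multi-path merging loop.
import numpy as np

def simplified_merge(inputs, mode="sum"):
    """Reduce the streams arriving on the first, second, ... input ports
    into one output stream (elementwise sum or mean)."""
    stacked = np.stack(inputs)
    return stacked.sum(axis=0) if mode == "sum" else stacked.mean(axis=0)

drive_a = np.array([0.1, 0.4, 0.0])  # output of the third synaptic group
drive_b = np.array([0.2, 0.1, 0.3])  # output of the fourth synaptic group
merged = simplified_merge([drive_a, drive_b])  # fed to the seventh neuron group
```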
The setting module 12 is further configured to add at least one proxy port to the neural network topology, where the proxy port includes a proxy input port or a proxy output port, the proxy input port is connected to an input port of at least one preset model object in the neural network topology, and the proxy output port is connected to an output port of at least one preset model object in the neural network topology.
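The effect of proxy ports is that a wired topology can itself be treated as one port model object: external input arrives at a proxy input port and is forwarded to an inner input port, and output is read from a proxy output port bound to an inner output port. The sketch below is an assumed representation; `CompositeModelObject` and the port names are not from the patent.

```python
# Sketch: wrap a wired topology with proxy ports so the whole network can
# be reused as a single port model object. Names are illustrative.

class CompositeModelObject:
    def __init__(self, name):
        self.name = name
        self.proxy_inputs = {}   # proxy input port  -> (inner object, input port)
        self.proxy_outputs = {}  # proxy output port -> (inner object, output port)

    def add_proxy_input(self, proxy_name, inner_obj, inner_port):
        self.proxy_inputs[proxy_name] = (inner_obj, inner_port)

    def add_proxy_output(self, proxy_name, inner_obj, inner_port):
        self.proxy_outputs[proxy_name] = (inner_obj, inner_port)

# Expose the loop described above through two proxy ports: stimulus goes to
# the first neuron group, the readout comes from the second neuron group.
net = CompositeModelObject("plasticity_loop")
net.add_proxy_input("stimulus", "neurons1", "in")
net.add_proxy_output("readout", "neurons2", "out")
```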
The neural network topology structure comprises a neural network topology structure prestored in the system and a neural network topology structure customized by a user.
The setting module 12 is further configured to obtain a custom model object designed by a user, and associate or interact information with at least one preset model object and at least one custom model object through respective ports.
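A custom model object interoperates with the presets precisely because it follows the same port contract. The base class and the `LeakyIntegrator` example below are illustrative assumptions, not the patent's API.

```python
# Sketch of a user-defined custom model object that exposes the same
# input/output port contract as the preset objects.

class PortModelObject:
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.inputs = {p: 0.0 for p in inputs}
        self.outputs = {p: 0.0 for p in outputs}

class LeakyIntegrator(PortModelObject):
    """Custom model body: a leaky accumulator exposed through 'in'/'out',
    so the setting module can wire it to presets without special-casing."""
    def __init__(self, name, leak=0.5):
        super().__init__(name, inputs=("in",), outputs=("out",))
        self.leak, self.state = leak, 0.0

    def step(self):
        self.state = self.leak * self.state + self.inputs["in"]
        self.outputs["out"] = self.state

unit = LeakyIntegrator("custom1", leak=0.5)
unit.inputs["in"] = 1.0
unit.step()  # state = 0.5 * 0.0 + 1.0 = 1.0
unit.step()  # state = 0.5 * 1.0 + 1.0 = 1.5
```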
Fig. 16 is a schematic structural diagram of an embodiment of a neural network construction platform according to the present invention; the port model object calling method described above can be applied to the neural network construction platform shown in fig. 16. The neural network construction platform 100 includes a front-end portion 200, a core portion 300, and a back-end portion 400. The front-end portion 200 is used to act as a proxy for the outside of the neural network construction platform 100 to interact with the core portion 300 and/or the back-end portion 400; the back-end portion 400 is used to run the target model; the core portion 300 is configured to describe and/or organize the target model in response to a request from the front-end portion 200 and/or the back-end portion 400. The core portion 300 includes a port module 301, where the port module 301 is capable of implementing the port model object calling method as described above.
In one implementation scenario, the target model refers to a neural network model, which may be a deep neural network model, a firing-rate neural network model, an impulse neural network model, or a mixture thereof. In one possible implementation, the impulse neural network model includes, but is not limited to, an upper-level brain-like model and an upper-level cognitive model. In one implementation scenario, the upper-level brain-like model includes, but is not limited to, a whole-brain model, a brain-region-level model, a loop model, and the like. In one possible implementation, the upper-level cognitive model includes, but is not limited to, language mechanisms, memory mechanisms, motor mechanisms, decision mechanisms, and the like.
Fig. 17 shows an internal structural diagram of the smart device in one embodiment. The smart device can be a terminal or a server. As shown in fig. 17, the smart device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the smart device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the port model object calling method described above. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the port model object calling method. It will be appreciated by those skilled in the art that the structure shown in fig. 17 is merely a block diagram of a portion of the structure associated with the present application and does not limit the smart device to which the present application is applied; a particular smart device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a smart device is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the method steps as described above.
In an embodiment, a computer-readable storage medium is proposed, storing a computer program which, when executed by a processor, causes the processor to perform the method steps as above.
Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the present application; they are described in considerable detail, but are not thereby to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and all of them fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (25)
1. A method for invoking a port model object, comprising:
invoking at least two preset model objects prestored in a system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is used for associating data and/or operation, and the port is used for carrying out information interaction or association with the outside of the port model object;
And setting at least two preset model objects to perform association or information interaction through at least one port of each preset model object.
2. The port model object invocation method of claim 1, wherein the port comprises at least one of an input port, an output port, a reference port, and a connection port.
3. The port model object invocation method of claim 1, wherein the preset model object comprises at least one of a neuron model object, a synaptic model object, a dendritic model object, a plastic model object, and a neuromodulation model object.
4. A method for invoking port model objects according to claim 3, wherein said step of setting at least two of said preset model objects to associate or interact information via respective said ports comprises:
and forming a neural network topological structure by the association or information interaction of at least two preset model objects through the ports.
5. The port model object invocation method of claim 4, wherein the neural network topology comprises at least one of a feed-forward structure, a feedback structure, a loop structure, a cross-layer structure, a lateral structure, a self-loop structure, a neuron-neuron direct structure, a synapse-synapse direct structure, a synapse-dendritic direct structure, and a loop structure.
6. The port model object invocation method of claim 5, wherein the loop structure comprises a neuron-synapse-neuron-synaptic plasticity loop structure;
when the neural network topology is a neuron-synapse-neuron-synaptic plasticity loop structure, the step of forming the neural network topology by association or information interaction of at least two preset model objects through the ports comprises the following steps:
connecting the output port of a first neuron group model object with the input port of a first synaptic group model object and an input port of a first synaptic plasticity model object, connecting the connection port of the first synaptic group model object with the connection port of the first synaptic plasticity model object, connecting the output port of the first synaptic group model object with the input port of a second neuron group model object, and connecting the other input port of the first synaptic plasticity model object with the output port of the first neuron group model object.
7. The port model object invocation method of claim 5, wherein the loop structure comprises a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure;
when the neural network topology is a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure, the step of forming the neural network topology by association or information interaction of at least two preset model objects through the ports comprises the following steps:
connecting the output port of a third neuron group model object with the input port of a second synaptic group model object and an input port of a second synaptic plasticity model object, connecting the connection port of the second synaptic group model object with the connection port of the second synaptic plasticity model object, connecting the connection port of a first synaptic group model object with the connection port of a direct conversion model object, connecting the output port of the second synaptic group model object with the input port of the first synaptic group model object, connecting the output port of the direct conversion model object with the other input port of the second synaptic plasticity model object, connecting the output port of the first synaptic group model object with the input port of a fourth synaptic plasticity model object, and connecting the input port of the direct conversion model object with the output port of the first synaptic plasticity model object.
8. The port model object invocation method of claim 5, wherein the loop structure comprises a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure;
when the neural network topology is a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure, the step of forming the neural network topology by association or information interaction of at least two preset model objects through the ports comprises the following steps:
connecting the output port of a fifth neuron group model object with the input port of a third synaptic group model object and an input port of a third synaptic plasticity model object, connecting the connection port of the third synaptic group model object with the connection port of the third synaptic plasticity model object, connecting the output port of a sixth neuron group model object with the input port of a fourth synaptic group model object and an input port of a fourth synaptic plasticity model object, connecting the connection port of the fourth synaptic group model object with the connection port of the fourth synaptic plasticity model object, connecting the output port of the third synaptic group model object with the first input port of a simplified model object, connecting the output port of the fourth synaptic group model object with the second input port of the simplified model object, connecting the output port of the simplified model object with the input port of a seventh neuron group model object, connecting the other input port of the third synaptic plasticity model object with the output port of the seventh neuron group model object, and connecting the other output port of the fourth synaptic plasticity model object with the input port of the seventh neuron group model object.
9. The port model object invocation method of claim 4, wherein after the step of forming a neural network topology by association or information interaction of at least two of the preset model objects through the ports, the method comprises:
and adding at least one proxy port for the neural network topology structure, wherein the proxy port comprises a proxy input port or a proxy output port, the proxy input port is connected with the input port of at least one preset model object in the neural network topology structure, and the proxy output port is connected with the output port of at least one preset model object in the neural network topology structure.
10. The method for invoking the port model object according to claim 4, wherein the neural network topology comprises a pre-stored neural network topology and a user-defined neural network topology in the system.
11. The method for invoking port model objects according to claim 1, wherein said step of setting at least two of said preset model objects to associate or interact information through respective said ports comprises:
and acquiring a user-designed custom model object, and carrying out association or information interaction on at least one preset model object and at least one custom model object through respective ports.
12. A port model object invocation system, comprising:
the model object module is used for calling at least one preset model object prestored in the system, wherein each preset model object is a port model object, the port model object comprises at least one port and a model main body, the model main body is associated with data and/or operation, and the port is used for carrying out information interaction or association with the outside of the port model object;
the setting module is used for setting at least two preset model objects to be associated or interacted with each other through the ports.
13. The port model object invocation system of claim 12, wherein the port comprises at least one of an input port, an output port, a reference port, and a connection port.
14. The port model object invocation system of claim 13, wherein the pre-set model object comprises at least one of a neuron model object, a synaptic model object, a dendritic model object, a plastic model object, a neuromodulation model object.
15. The port model object invocation system of claim 14, wherein the setting module is further configured to:
And forming a neural network topological structure by the association or information interaction of at least two preset model objects through the ports.
16. The port model object invocation system of claim 15, wherein the neural network topology comprises at least one of a feed-forward structure, a feedback structure, a loop structure, a cross-layer structure, a lateral structure, a self-loop structure, a neuron-neuron direct structure, a synapse-synapse direct structure, a synapse-dendritic direct structure, and a loop structure.
17. The port model object invocation system of claim 16, wherein the loop structure comprises a neuron-synapse-neuron-synaptic plasticity loop structure;
when the neural network topology is a neuron-synapse-neuron-synaptic plasticity loop structure, the setting module is further configured to:
connect the output port of a first neuron group model object with the input port of a first synaptic group model object and an input port of a first synaptic plasticity model object, connect the connection port of the first synaptic group model object with the connection port of the first synaptic plasticity model object, connect the output port of the first synaptic group model object with the input port of a second neuron group model object, and connect the other input port of the first synaptic plasticity model object with the output port of the first neuron group model object.
18. The port model object invocation system of claim 16, wherein the loop structure comprises a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure;
when the neural network topology is a neuron-synapse-dendrite-neuron-synaptic plasticity loop structure, the setting module is further configured to:
connect the output port of a third neuron group model object with the input port of a second synaptic group model object and an input port of a second synaptic plasticity model object, connect the connection port of the second synaptic group model object with the connection port of the second synaptic plasticity model object, connect the connection port of a first synaptic group model object with the connection port of a direct conversion model object, connect the output port of the second synaptic group model object with the input port of the first synaptic group model object, connect the output port of the direct conversion model object with the other input port of the second synaptic plasticity model object, connect the output port of the first synaptic group model object with the input port of a fourth synaptic plasticity model object, and connect the input port of the direct conversion model object with the output port of the first synaptic plasticity model object.
19. The port model object invocation system of claim 16, wherein the loop structure comprises a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure;
when the neural network topology is a neuron-synapse-neuron-synaptic plasticity multi-path merging loop structure, the setting module is further configured to:
connect the output port of a fifth neuron group model object with the input port of a third synaptic group model object and an input port of a third synaptic plasticity model object, connect the connection port of the third synaptic group model object with the connection port of the third synaptic plasticity model object, connect the output port of a sixth neuron group model object with the input port of a fourth synaptic group model object and an input port of a fourth synaptic plasticity model object, connect the connection port of the fourth synaptic group model object with the connection port of the fourth synaptic plasticity model object, connect the output port of the third synaptic group model object with the first input port of a simplified model object, connect the output port of the fourth synaptic group model object with the second input port of the simplified model object, connect the output port of the simplified model object with the input port of a seventh neuron group model object, connect the other input port of the third synaptic plasticity model object with the output port of the seventh neuron group model object, and connect the other output port of the fourth synaptic plasticity model object with the input port of the seventh neuron group model object.
20. The port model object invocation system of claim 15, wherein the setting module is further configured to:
and adding at least one proxy port for the neural network topology structure, wherein the proxy port comprises a proxy input port or a proxy output port, the proxy input port is connected with the input port of at least one preset model object in the neural network topology structure, and the proxy output port is connected with the output port of at least one preset model object in the neural network topology structure.
21. The port model object invocation system of claim 15, wherein the neural network topology comprises a pre-stored neural network topology and a user-defined neural network topology in the system.
22. The port model object invocation system of claim 12, wherein the setting module is further configured to:
and acquiring a user-designed custom model object, and carrying out association or information interaction on at least one preset model object and at least one custom model object through respective ports.
23. A neural network build platform, comprising: a front end portion, a core portion, and a rear end portion;
the front-end portion is used for acting as a proxy for the outside of the platform to interact with the core portion and/or the back-end portion;
the back end part is used for running a target model;
the core portion is used for describing and/or organizing the target model in response to a request of the front-end portion and/or the back-end portion, the core portion comprising a port module for implementing the method of any of claims 1-11.
24. A smart device comprising a memory and a processor, the memory having stored therein a computer program, and the processor being configured to execute the computer program to implement the method of any of claims 1-11.
25. A storage medium storing a computer program capable of being loaded by a processor and executing the method according to any one of claims 1-11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211068021.8A CN117709413A (en) | 2022-09-02 | 2022-09-02 | Port model object calling method, system, platform, intelligent device and storage medium |
PCT/CN2023/116450 WO2024046462A1 (en) | 2022-09-02 | 2023-09-01 | Port model object calling method and system, platform, intelligent device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211068021.8A CN117709413A (en) | 2022-09-02 | 2022-09-02 | Port model object calling method, system, platform, intelligent device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117709413A true CN117709413A (en) | 2024-03-15 |
Family
ID=90100449
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211068021.8A Pending CN117709413A (en) | 2022-09-02 | 2022-09-02 | Port model object calling method, system, platform, intelligent device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117709413A (en) |
WO (1) | WO2024046462A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8990130B2 (en) * | 2012-11-21 | 2015-03-24 | International Business Machines Corporation | Consolidating multiple neurosynaptic cores into one memory |
CN106156845A (en) * | 2015-03-23 | 2016-11-23 | 日本电气株式会社 | A kind of method and apparatus for building neutral net |
CN108182473A (en) * | 2017-12-12 | 2018-06-19 | 中国科学院自动化研究所 | Full-dimension distributed full brain modeling system based on class brain impulsive neural networks |
CN113688981A (en) * | 2020-05-19 | 2021-11-23 | 深圳忆海原识科技有限公司 | Brain-like neural network with memory and information abstraction function |
CN112070213A (en) * | 2020-08-28 | 2020-12-11 | Oppo广东移动通信有限公司 | Neural network model optimization method, device, equipment and storage medium |
CN114548384A (en) * | 2022-04-28 | 2022-05-27 | 之江实验室 | Method and device for constructing impulse neural network model with abstract resource constraint |
- 2022-09-02: CN application CN202211068021.8A filed (active, pending)
- 2023-09-01: WO application PCT/CN2023/116450 filed (status unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2024046462A1 (en) | 2024-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Arik | Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays | |
US9542643B2 (en) | Efficient hardware implementation of spiking networks | |
CN105637541B (en) | Shared memory architecture for neural simulator | |
CN109684224B (en) | Method and device for testing conversion process of python code and building block | |
CN111752571A (en) | Program upgrading method, device, equipment and storage medium | |
US10552734B2 (en) | Dynamic spatial target selection | |
EP3111378A2 (en) | Method and apparatus for efficient implementation of common neuron models | |
CN110891091B (en) | Block chain-based equipment control method and device and server | |
Chen et al. | Adaptive quasi-synchronization control of heterogeneous fractional-order coupled neural networks with reaction-diffusion | |
CN115686527A (en) | Compiling method and device based on operator, computer equipment and storage medium | |
EP3502974A1 (en) | Method for realizing a neural network | |
US20150278683A1 (en) | Plastic synapse management | |
CN117709413A (en) | Port model object calling method, system, platform, intelligent device and storage medium | |
US11120513B2 (en) | Capital chain information traceability method, system, server and readable storage medium | |
US20230118943A1 (en) | Neuromorphic system and operating method thereof | |
CN113591280B (en) | Modelica model calculation method, device, equipment and storage medium | |
Stetsenko et al. | Petri-object simulation: technique and software | |
CN106815638B (en) | Input weight expanded neuron information processing method and system | |
CN110532533B (en) | Form precision collocation method, device, computer equipment and storage medium | |
US20140365413A1 (en) | Efficient implementation of neural population diversity in neural system | |
CN117688969A (en) | Port model object organization method, system, platform, intelligent device and medium | |
KR102473941B1 (en) | Device and method for parallel processing of deep learning model | |
US20230206048A1 (en) | Crossbar-based neuromorphic computing apparatus capable of processing large input neurons and method using the same | |
CN108764464A (en) | Neuronal messages sending method, device and storage medium | |
Santos-García et al. | Rewriting logic using strategies for neural networks: An implementation in Maude |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |