CN111552563A - Multithreading data architecture, multithreading message transmission method and system - Google Patents


Info

Publication number: CN111552563A
Application number: CN202010311396.7A
Authority: CN (China)
Prior art keywords: neuron, thread, logic, downstream, logical
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111552563B (en)
Inventor: 林嘉
Current Assignee: Nanchang Jiayan Technology Co ltd
Original Assignee: Nanchang Jiayan Technology Co ltd
Application filed by Nanchang Jiayan Technology Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the technical field of multithreaded development, and in particular relates to a multithreaded data architecture and a multithreaded message passing method and system. The architecture comprises logic variables, logic nodes, a node list, neurons, a neuron center, and neuron threads. The neuron center centrally stores the neurons: it loads the logic node list, instantiates all neuron objects according to the content of the list, and constructs a logic network according to the logical relationships the list records. The neuron center associates each registered neuron thread with the corresponding neuron object. When a neuron object senses a change of its input state, it triggers the associated neuron thread object to perform the required operation, and the whole logic network thereby runs across multiple threads. The invention decomposes the program's computation into threads according to data flow state changes and the causal deduction relationships among the data, constructs a multithreaded data architecture and message passing mechanism, and improves the efficiency of multithreaded operation.

Description

Multithreading data architecture, multithreading message transmission method and system
Technical Field
The invention belongs to the technical field of multithreading development, and particularly relates to a multithreading data architecture, a multithreading message transmission method and a multithreading message transmission system.
Background
In a computer program, multithreading has the advantage of using the CPU's computing resources effectively, but all multithreaded software must face the problem of cross-thread message passing. When a program transfers data between threads, the lowest layer is in fact memory sharing: a region is allocated in memory, and different threads are given access rights to that region. In actual code, to reduce the amount of copying, the complete data is usually not transmitted across threads; only the memory address where the data resides is transmitted. Because these pieces of information are small, they are commonly called messages, and their delivery is called cross-thread message passing.
Cross-thread message passing faces problems such as data synchronization and resource interlocking; it carries real risk for software and demands special care from programmers during development. Synchronization means that accesses to the same resource by different threads must be separated in time: while one thread is operating on the resource, another thread must not intervene, otherwise data errors result. To coordinate access by different threads to the same block of resources, many programming languages provide the concept of a lock. A lock is a flag bit that marks whether a block of resources is in use. For a resource in use, another thread must wait until the lock is released before it gains the right to use the resource. Using flag locks can also cause the multiple resources required by multiple threads to lock each other out, producing program deadlock.
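The flag-bit lock described above can be sketched with std::mutex (a minimal illustration, not code from the patent; all names here are assumptions): two threads increment a shared counter, and the lock separates their accesses in time so no update is lost.

```cpp
#include <mutex>
#include <thread>

// Two worker threads share one counter; the mutex is the "flag bit" the text
// describes, so the threads' accesses to the shared resource never interleave.
int locked_increment_demo(int per_thread) {
    int counter = 0;
    std::mutex lock;
    auto worker = [&]() {
        for (int i = 0; i < per_thread; ++i) {
            std::lock_guard<std::mutex> guard(lock); // wait if the other thread holds it
            ++counter;                               // protected section
        } // guard's destructor unlocks, releasing the resource to the other thread
    };
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    return counter;
}
```

Without the lock_guard, the two unsynchronized increments could interleave and the final count could fall short of the expected total.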
Cross-thread message passing is thus risky: once mishandled, it easily crashes the program, so developing multithreaded programs brings significant uncertainty and engineering risk to a software development team. Although multithreading exploits CPU computing resources better, developing multithreaded programs is still not mainstream in the software industry today. The use of multiple threads is mostly limited to cases where a large computing task can be decomposed into several homogeneous subtasks with similar computations: for example, computing different elements of a matrix simultaneously in multiple threads, or rendering different parts of a picture in parallel. Such multithreaded applications only accelerate specific tasks; they do not adopt a multithreaded software architecture at the overall design level. At present there is no software design method that implements multithreading at the software architecture level, and a standardized cross-thread data transmission and communication mechanism is lacking.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a multithreaded data architecture and a multithreaded message passing method and system. Taking the working mode of neurons as a bionic basis, the program's computation is decomposed into threads according to data flow state changes and the causal deduction relationships among the data, thereby constructing a multithreaded data architecture and message passing mechanism and improving the efficiency of multithreaded operation.
In a first aspect, the present invention provides a multithreaded data architecture comprising logical variables, logical nodes, a node list, neurons, a neuron hub, and neuron threads;
the logic variables represent all logical variables involved in the business scope, each logic variable having a corresponding variable name;
the logic nodes represent the causal relationships among different logic variables; each logic node comprises the logic variable representing the node and all upstream logic variables that influence that variable's state;
the node list stores each instantiated logic node object according to the recorded logical relationships;
the neurons express the logical reasoning behavior among the logic variables in the logic nodes; each neuron comprises the logic variable representing the corresponding node and a logical reasoning algorithm realizing the causal relationship between that variable and its upstream logic variables;
the neuron center centrally stores the neurons; when the program starts running, the instantiated neuron center loads the node list, instantiates all neuron objects according to the content of the node list, and constructs a logic network according to the logical relationships the node list records;
each instantiated neuron thread object registers itself with the neuron center, and the neuron center associates the registered neuron thread with the corresponding neuron object; after a neuron object senses a change of its input state, it triggers the associated neuron thread object to perform the required operation, and the whole logic network thereby runs across multiple threads.
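As a hedged illustration only (the patent gives no implementation of these elements, so every identifier beyond the six names above is an assumption), the data side of the architecture might be sketched as:

```cpp
#include <functional>
#include <map>
#include <vector>

// Assumed sketch of the architecture's data elements.
enum Concept { UMBRELLA, GROUND_WET, RAINING };   // logic variables

struct Notion {                        // logic node: its variable + upstream variables
    Concept self;
    std::vector<Concept> upstream;
};

struct Neuron {                        // Notion + a reasoning algorithm + a state
    Notion notion;
    std::function<bool(const std::map<Concept, bool>&)> infer; // causal algorithm
    bool state = false;
};

struct NeuronCenter {                  // centrally stores neurons, builds the net
    std::map<Concept, Neuron> neurons;
    void load(const std::vector<Notion>& notionList) {
        for (const Notion& n : notionList)
            neurons[n.self] = Neuron{n, nullptr, false};
        // a full implementation would also wire each neuron to its upstream set
    }
};
```

A NotionList here is simply a `std::vector<Notion>` handed to `load()`, mirroring the node list the text describes.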
Preferably, the neuron thread is a subclass of the Bot class, and inherits the properties and methods of the Bot class.
In a second aspect, the present invention provides a multithreaded message passing method based on the multithreaded data architecture of the first aspect, comprising the following steps:
when a working thread of a neuron thread object generates a message, the neuron thread object transmits message data to an associated neuron object through a Fireup function;
when the neuron object senses that the logic state value changes according to the message data, the neuron object calls a fire function to push the logic state value to a downstream neuron object;
when the downstream neuron object senses the change of the input logic state value, it wakes up the working thread of the associated downstream neuron thread object; the downstream neuron thread object receives the input message data with an input function and processes the message data in its own thread, completing the transfer of the message from one working thread to another.
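A minimal single-threaded sketch of these three steps, with simple integer messages and a direct call standing in for the thread wake-up (the Fireup, fire, and input names come from the text; every other identifier is an assumption):

```cpp
#include <functional>
#include <vector>

struct Neuron;

// The thread-side object: owns the input callback that would run in its thread.
struct NeuronAgent {
    Neuron* neuron = nullptr;
    std::function<void(int)> input;   // processes a received message
    void Fireup(int value);           // step 1: hand a new message to the neuron
};

struct Neuron {
    int state = 0;
    std::vector<Neuron*> downstream;  // simplified subscription table
    NeuronAgent* agent = nullptr;
    void fire(int value) {            // step 2: push a *changed* state downstream
        if (value == state) return;   // no change, nothing propagates
        state = value;
        for (Neuron* d : downstream) {
            if (d->agent && d->agent->input) d->agent->input(value); // step 3
            d->fire(value);           // continue through the logic network
        }
    }
};

void NeuronAgent::Fireup(int value) { if (neuron) neuron->fire(value); }
```

In the real architecture step 3 would wake the downstream agent's worker thread rather than invoke `input` inline.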
Preferably, the multithreading message passing method further includes the following steps:
each neuron thread object is initialized and associated with a corresponding neuron object.
Preferably, the initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object specifically is:
instantiating neuron centers and neuron thread objects after the program is started;
loading a node list into a neuron center, instantiating all neuron objects according to the content of the node list, and constructing a logic network according to the logic relation recorded by the node list;
the neuron thread object sends registration information to the neuron hub, which associates the registered neuron thread object with a corresponding neuron object.
Preferably, the neuron object calls a fire function to push the logic state value to a downstream neuron object, specifically:
the neuron object queries a subscription table, and the subscription table stores neuron objects having downstream logical relations with the neuron objects;
obtaining a downstream neuron object according to a subscription table;
and the neuron object calls a fire function to push the logic state value to a downstream neuron object.
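The subscription-table lookup in these steps might take the following shape, as a sketch under assumed types (only the fire name is from the text; neurons are reduced to integer ids):

```cpp
#include <map>
#include <vector>

// Each neuron id maps to the ids subscribed to it downstream; fire() queries
// the table and pushes the new state to every subscriber, recursively.
struct LogicNet {
    std::map<int, std::vector<int>> subscription; // id -> downstream ids
    std::map<int, int> state;                     // id -> logic state value
    std::vector<int> fired;                       // records push order, for inspection
    void fire(int id, int value) {
        state[id] = value;
        for (int down : subscription[id]) {       // query the subscription table...
            fired.push_back(down);
            fire(down, value);                    // ...and push to each subscriber
        }
    }
};
```

A real subscription table would also record upstream relations, as the embodiment notes, so the full logical relationships can be read from it.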
In a third aspect, the present invention provides a multithreaded message passing system based on the multithreaded data architecture of the first aspect, comprising a message generating unit, a logic pushing unit, and a thread crossing unit:
the message generating unit is used for transmitting message data to the related neuron object through a Fireup function when a working thread of the neuron thread object generates a message;
the logic pushing unit is used for the neuron object to call a fire function to push the logic state value to a downstream neuron object when the neuron object senses that the logic state value changes according to the message data;
and the thread crossing unit is used for waking up the working thread of the associated downstream neuron thread object when the downstream neuron object senses the change of the input logic state value; the downstream neuron thread object receives the input message data with an input function and processes the message data in its own thread, completing the transfer of the message from one working thread to another.
Preferably, the multithreaded message passing system further includes an initialization unit;
the initialization unit is used for initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object.
Preferably, the initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object specifically is:
instantiating neuron centers and neuron thread objects after the program is started;
loading a node list into a neuron center, instantiating all neuron objects according to the content of the node list, and constructing a logic network according to the logic relation recorded by the node list;
the neuron thread object sends registration information to the neuron hub, which associates the registered neuron thread object with a corresponding neuron object.
Preferably, the neuron object calls a fire function to push the logic state value to a downstream neuron object, specifically:
the neuron object queries a subscription table, and the subscription table stores neuron objects having downstream logical relations with the neuron objects;
obtaining a downstream neuron object according to a subscription table;
and the neuron object calls a fire function to push the logic state value to a downstream neuron object.
According to the technical scheme, the neuron working mode is used as a bionic basis, and the program operation content is subjected to thread decomposition according to the data flow state change and the cause-and-effect deduction relationship among data, so that a multi-thread data architecture and a message transmission mechanism are constructed, and the efficiency of multi-thread operation is improved.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a block diagram of a multi-threaded data architecture according to the present embodiment;
FIG. 2 is a flow chart of a multi-threaded message passing method in the present embodiment;
FIG. 3 is a task flow diagram of the initialization of the neuron center in the present embodiment;
FIG. 4 is a schematic diagram illustrating interaction in message passing between threads according to the present embodiment;
FIG. 5 is a block diagram of a specific example of the present embodiment;
fig. 6 is a schematic structural diagram of a multi-thread message passing system in this embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
The human brain is a typical multithreaded processor: each brain neuron has its own independent life cycle, i.e. it works in its own independent thread. Science has no unified theory of how the brain works, but based on general brain anatomy and microscopy we can make some credible simplifications of the basic working pattern of brain cells. The most basic working unit in the brain is the brain cell, and brain cells with the function of transmitting and receiving nerve impulses are called neurons. Neurons are stellate; their extending nerve fibers form synapses with the nerve fibers of other neurons for conducting nerve impulses. Each neuron has many nerve fibers for receiving nerve impulses, called dendrites, and a single nerve fiber for sending nerve impulses, called the axon. A working neuron senses the electrical changes of surrounding nerves with its dendrites; the sensed changes converge through the dendrites to the cell body, which comprehensively judges the surrounding signals and decides whether to change its own excitation state. If the excitation condition is met, it emits an electrical impulse outward through the axon.
The working process of the cerebral neurons can be summarized as the following characteristics:
1) each neuron represents an independent logic variable, different neurons represent different logic variables, and the firing state of each neuron characterizes the state of the logic variable.
2) A neuron has two kinds of ports, input and output, and many neurons connect through their respective input and output ports into a logic network with hierarchical relationships. In the following, the logic variable represented by the neuron whose output feeds a synapse is called the upstream logic variable, and the one on the receiving side the downstream logic variable.
3) Each neuron computes the state of the logic variable it represents by observing the real-time states of all its upstream logic variables, and then publishes that state.
The above describes the logical reasoning mechanism of the brain's neural network under multithreaded message passing. Following this biological mechanism, the present embodiment provides a multithreaded data architecture and a multithreaded message passing method and system.
The first embodiment is as follows:
the present embodiment provides a multithreaded data architecture, as shown in fig. 1, including a logic variable Concept, a logic node Notion, a node list Notionlist, a Neuron, a Neuron center, and a Neuron thread agent;
the logic variables Concept are used for representing all logic variables related in the service range, and each Concept has a corresponding variable name. Concept is an enumerated variable whose variable name should be a piece of text with a substantial meaning, and thus the variable name of Concept also characterizes the actual meaning of this logical variable as a linguistic tag. In actual programming, a set of strings with the same name as the linguistic tag for each value in Concept is often required for debugging purposes.
The logic nodes Notions are used for representing the causal relationship among different Concepts, and each Notion comprises a Concept representing a node and all upstream Concepts influencing the state of the Concepts. The substantive information stored in the Notion is a causal relationship between different logical variables in which the state of an upstream Concept changes to affect a downstream Concept.
The node list NotionList stores each instantiated Notion object according to the recorded logical relationships. Connecting the causal relationships stored in the Notions end to end yields the complete logic network of the business logic the program is to process. The NotionList is a space that collectively stores each instantiated Notion object; using the NotionList, a developer can examine the business logic of the whole program in full.
The Neuron expresses the logical reasoning behavior among the Concepts in a Notion. Each Neuron comprises the Concept representing the corresponding node and a logical reasoning algorithm that realizes the causal relationship between that Concept and its upstream Concepts. The Neuron is the most basic logical reasoning unit: it adds logical reasoning behavior on top of the Notion. The data in a Notion records which other logical variables a variable's state influences; the algorithm contained in the Neuron realizes the transmission of the logic state values.
The neuron center NeuronCenter centrally stores the Neurons. When the program starts running, the instantiated NeuronCenter loads the NotionList, instantiates all Neuron objects according to its content, and constructs a logic network according to the logical relationships it records.
Each instantiated neuron thread NeuronAgent object registers itself with the NeuronCenter, and the NeuronCenter associates the registered NeuronAgent object with the corresponding Neuron object. After a Neuron object senses a change of its input state, it triggers the associated NeuronAgent object to perform the required operation, and the whole logic network thereby runs across multiple threads. The NeuronAgent object is a dedicated computing unit paired with a Neuron object; its computing processes are long and code-heavy, so it needs an independent working thread in which to complete its work. The NeuronAgent in this embodiment is a subclass of the Bot class, such as the Bot class described in patent 201910281108.5, a multithreading program architecture method and architecture system, and inherits the attributes and methods of the Bot class.
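The Bot-with-own-worker-thread relationship might look as follows; since the Bot class is defined in the separate patent cited above, this base class is entirely an assumption:

```cpp
#include <atomic>
#include <thread>

// Assumed Bot-like base: owns exactly one worker thread and a stop flag.
class Bot {
public:
    virtual ~Bot() { stop(); }       // callers should stop() before destruction
    void start() { worker_ = std::thread([this] { run(); }); }
    void stop() {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }
protected:
    virtual void run() = 0;          // the subclass's independent work loop
    std::atomic<bool> running_{true};
private:
    std::thread worker_;
};

// NeuronAgent (the patent's name) inherits the thread machinery and does its
// long computation off the rest of the logic network's threads.
class NeuronAgent : public Bot {
public:
    std::atomic<int> ticks{0};
protected:
    void run() override {
        while (running_) {
            ++ticks;                 // stand-in for the agent's real work
            std::this_thread::yield();
        }
    }
};
```

The point of the design is that each agent's work loop lives entirely in its own thread, matching the "independent working thread" requirement in the text.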
In summary, the present embodiment takes the working mode of neurons as a bionic basis and decomposes the program's computation into threads according to data flow state changes and the causal deduction relationships among the data, thereby constructing the multithreaded data architecture of this embodiment; multithreaded operation under this architecture is more efficient.
Example two:
the embodiment provides a multithreading message passing method, based on the multithreading data architecture described in the first embodiment, as shown in fig. 2, including the following steps:
s1, initializing each NeuronAgent object and associating each NeuronAgent object with the corresponding Neuron object;
s2, when a working thread of a NeuronAgent object generates a message, the NeuronAgent object transmits the message data to the associated Neuron object through the Fireup function;
s3, when the Neuron object senses from the message data that its logic state value has changed, the Neuron object calls the fire function to push the logic state value to the downstream Neuron objects;
s4, when a downstream Neuron object senses the change of its input logic state value, it wakes up the working thread of the associated downstream NeuronAgent object; the downstream NeuronAgent object receives the input message data with its input function and processes the message data in its own thread, completing the transfer of the message from one working thread to another.
In step S1 of this embodiment, initializing each NeuronAgent object and associating each NeuronAgent object with the corresponding Neuron object specifically includes:
after the program is started, instantiating the NeuronCenter and the NeuronAgent objects (the NeuronCenter object is instantiated first after the program starts);
the NeuronCenter loads the NotionList, instantiates all Neuron objects according to the content of the NotionList (i.e. the Neuron objects are instantiated according to the instantiated Concept objects in the NotionList), and constructs a logic network according to the logical relationships the NotionList records;
the NeuronAgent object sends registration information to the NeuronCenter, which associates the registered NeuronAgent object with the corresponding Neuron object.
Fig. 3 is a task flow diagram of the NeuronCenter initialization in this embodiment. After initialization and object association, message passing begins once an original message is produced in the working thread of some NeuronAgent object. The NeuronAgent object passes the message data to the Neuron object associated with it; from the perspective of the logic network, this is equivalent to a change in the logic state value of the logic variable associated with that Neuron.
When the Neuron object senses from the message data that the logic state value has changed, the Neuron object queries the subscription table. The subscription table stores the Neuron objects in a downstream logical relation with this Neuron object, and also the Neuron objects in an upstream logical relation with it, so the logical relationships between Neuron objects can be read from the subscription table. The downstream Neuron objects are obtained from the subscription table, and the Neuron object calls the fire function to push the logic state value to them. This process is equivalent to a neuron firing a nerve impulse.
When a downstream Neuron object senses the change of its input state, the working thread of the NeuronAgent object associated with that downstream Neuron object is woken through the thread lock agentAlarm field. The NeuronAgent object receives the input data with its input function and processes the data in its own thread. Fig. 4 is an interaction diagram of the inter-thread message passing in this embodiment. The message has now passed from one thread to another; in the same way, messages pass between the threads of the logic network, realizing the multithreaded operation of the whole logic network.
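The wake-up step can be sketched with a condition variable standing in for the agentAlarm thread lock (only the agentAlarm name is from the text; the mailbox design around it is an assumption):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// The downstream neuron deposits a message and signals the alarm; the agent's
// worker thread sleeps on the alarm until a message arrives.
class AgentMailbox {
public:
    void deliver(int msg) {                 // called from the neuron's side
        {
            std::lock_guard<std::mutex> g(m_);
            inbox_.push(msg);
        }
        agentAlarm_.notify_one();           // wake the agent's worker thread
    }
    int awaitMessage() {                    // called from the agent's thread
        std::unique_lock<std::mutex> g(m_);
        agentAlarm_.wait(g, [this] { return !inbox_.empty(); });
        int msg = inbox_.front();
        inbox_.pop();
        return msg;                         // processed in the agent's own thread
    }
private:
    std::mutex m_;
    std::condition_variable agentAlarm_;
    std::queue<int> inbox_;
};
```

The predicate form of `wait` guards against spurious wake-ups, so the worker only proceeds when a message is actually queued.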
The multithreaded data architecture and message passing method of this embodiment are described in more detail below with a concrete program; fig. 5 is its module deployment diagram. Suppose a program needs to guess whether it is currently raining from whether passers-by have opened umbrellas and whether water has pooled on the ground. The judgment logic is: if passers-by open umbrellas and the ground is wet, then it is raining; otherwise it is not raining. The program has three logical variables: umbrella open, ground wet, and raining. The logic variable Concept is defined as follows:
enum Concept{
UMBRELLA,
GROUND_WET,
RAINING
};
the causal relationships of these three logical variables are defined in the NotionList as follows:
static vector<Notion>NotionList{
Notion{UMBRELLA},
Notion{GROUND_WET},
Notion{RAINING,{UMBRELLA,GROUND_WET}}
};
At run time the program loads the NotionList into the NeuronCenter, which accordingly instantiates three Neuron objects and builds a logic network from the defined causal relationships, as follows:
Neuron(UMBRELLA);
Neuron(GROUND_WET);
Neuron(RAINING);
After instantiating the three Neuron objects, the program needs to create three NeuronAgent objects:
UmbrellaMonitor umbrellaMonitor;    // subclass of NeuronAgent
GroundMonitor groundMonitor;        // subclass of NeuronAgent
RainingAnalyzer rainingAnalyzer;    // subclass of NeuronAgent
the three objects are respectively associated with the three logic variables and have independent working threads. The umbrella mornitor is responsible for monitoring the umbrella opening of passers and setting the logic state value of Neuron (UMBRELLA), and the group mornitor is responsible for monitoring the surface water accumulation and setting the logic state value of Neuron (GROUND _ WET). The logic state values of the two logic variables are transmitted as input signals to neuron (RAINING) and then to raining Analyzer, and logic analysis is completed in the working thread of the raining Analyzer object, and finally the raining Analyzer is called, wherein Fireup () sets the logic state value of neuron (RAING) and neuron (RAING); the logic state value may be derived whether it is currently raining.
If the logic state value of Neuron(UMBRELLA) is 1 and the logic state value of Neuron(GROUND_WET) is 1, Neuron(RAINING) obtains a logic state value of 1 according to the AND algorithm, indicating that it is currently raining. If either of the logic state values of Neuron(UMBRELLA) and Neuron(GROUND_WET) is 0, Neuron(RAINING) obtains a logic state value of 0 according to the AND algorithm, indicating that it is not currently raining.
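The AND deduction just described can be sketched as a pure function; the name `inferRaining` is illustrative, not from the patent:

```cpp
// Minimal sketch of the AND algorithm Neuron(RAINING) applies to its
// two upstream logic state values (1 = true, 0 = false).
int inferRaining(int umbrellaState, int groundWetState) {
    return (umbrellaState == 1 && groundWetState == 1) ? 1 : 0;
}
```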
In summary, this embodiment takes the working mode of neurons as its bionic basis and decomposes program operation content into threads according to data flow state changes and the causal deduction relationships between data, thereby constructing the multithread data architecture and multithread message passing method of this embodiment; performing multithreaded operation through this architecture and message passing method improves the efficiency of multithreaded operation.
Example three:
this embodiment provides a multithread message passing system based on the multithread data architecture described in the first embodiment. As shown in fig. 6, the multithread message passing system includes an initialization unit, a message generation unit, a logic pushing unit, and a thread crossing unit:
the initialization unit is used for initializing each NeuronAgent object and associating each NeuronAgent object with the corresponding Neuron object;
the message generating unit is used for transmitting message data to the associated Neuron object through the Fireup function when a working thread of a NeuronAgent object generates a message;
the logic pushing unit is used for the Neuron object to call the fire function to push the logic state value to downstream Neuron objects when the Neuron object senses, according to the message data, that its logic state value has changed;
and the thread cross-transmission unit is used for awakening the working thread of the associated downstream NeuronAgent object when the downstream Neuron object senses a change in its input logic state value; the downstream NeuronAgent object receives the input message data through its input function and processes the message data in its own thread, thereby completing message transmission from one working thread to another.
Initializing each NeuronAgent object and associating each NeuronAgent object with the corresponding Neuron object specifically includes:
after the program is started, the Neuron center and the NeuronAgent objects are instantiated;
the Neuron center loads the NotionList, instantiates all Neuron objects according to its contents, and constructs a logic network according to the logical relationships recorded in the NotionList;
the NeuronAgent object sends registration information to the Neuron center, which associates the registered NeuronAgent object with the corresponding Neuron object.
When the Neuron object senses, according to the message data, that its logic state value has changed, it queries a subscription table. The subscription table stores the Neuron objects having a downstream logical relation with this Neuron object, and also the Neuron objects having an upstream logical relation with it, so the logical relations between Neuron objects can be known through the subscription table. The downstream Neuron objects are obtained from the subscription table, and the Neuron object calls the fire function to push its logic state value to them. This process is equivalent to a neuron firing a nerve electrical impulse.
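The subscription-table lookup inside fire() might look like the following sketch. The table layout and names (`downstreamTable`, `fireTargets`) are assumptions drawn from the description, which only states that downstream (and upstream) relations are stored:

```cpp
#include <map>
#include <vector>

enum Concept { UMBRELLA, GROUND_WET, RAINING };

// Hypothetical subscription table: maps each logical variable to the
// downstream variables whose neurons subscribe to its state changes.
static std::map<Concept, std::vector<Concept>> downstreamTable{
    {UMBRELLA,   {RAINING}},
    {GROUND_WET, {RAINING}},
    {RAINING,    {}}
};

// The part of fire() that resolves where to push a changed state value.
std::vector<Concept> fireTargets(Concept changed) {
    return downstreamTable.at(changed);
}
```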
When the downstream Neuron object senses the input state change, the working thread of the NeuronAgent object associated with it is awakened through the thread lock agentAlarm field. The NeuronAgent object receives the input data using its input function and processes the data in its own thread. The message is thus transmitted from one thread to another; in the same way, messages can be transmitted among all the threads in the logic network, realizing multithreaded operation of the whole logic network.
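The agentAlarm wake-up can be sketched with a standard condition variable. Everything here (the member names, the use of an int message queue) is an assumption about one plausible implementation, not the patent's actual code:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Hedged sketch: a downstream agent whose worker thread sleeps on the
// agentAlarm condition variable until an upstream neuron pushes a value.
struct NeuronAgent {
    std::mutex m;
    std::condition_variable agentAlarm;
    std::queue<int> inbox;

    // Called from the upstream neuron's thread when its state changes.
    void push(int value) {
        {
            std::lock_guard<std::mutex> lk(m);
            inbox.push(value);
        }
        agentAlarm.notify_one();  // wake the agent's worker thread
    }

    // Runs in the agent's own worker thread; blocks until data arrives.
    int input() {
        std::unique_lock<std::mutex> lk(m);
        agentAlarm.wait(lk, [this] { return !inbox.empty(); });
        int value = inbox.front();
        inbox.pop();
        return value;
    }
};
```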
The following describes the multithread data architecture and message passing system in more detail by taking a specific program as an example. Suppose a program needs to infer whether it is currently raining from whether passers-by have opened umbrellas and whether water has accumulated on the ground. The judgment logic is: if passers-by open umbrellas and the ground is wet, then it is raining; otherwise it is not raining. There are three logical variables in this program: umbrella open, ground wet, and raining. The logical variable Concept is defined as follows:
(The Concept definition is rendered as an image in the source: Figure BDA0002457972100000131.)
the causal relationships of these three logical variables are defined in the NotionList as follows:
static vector<Notion>NotionList{
Notion{UMBRELLA},
Notion{GROUND_WET},
Notion{RAINING,{UMBRELLA,GROUND_WET}}
};
at program runtime, the Neuron center loads the NotionList, instantiates three Neuron objects accordingly, and builds a logical net according to the causal relationships the NotionList defines, as follows:
Neuron(UMBRELLA);
Neuron(GROUND_WET);
Neuron(RAINING);
after instantiating the three Neuron objects, the program needs to create three NeuronAgent objects:
<NeuronAgent>UmbrellaMonitor umbrellaMonitor;
<NeuronAgent>GroundMonitor groundMonitor;
<NeuronAgent>RainingAnalyzer rainingAnalyzer;
the three objects are respectively associated with the three logical variables, and each has an independent working thread. The umbrellaMonitor is responsible for monitoring whether passers-by open umbrellas and setting the logic state value of Neuron(UMBRELLA); the groundMonitor is responsible for monitoring surface water accumulation and setting the logic state value of Neuron(GROUND_WET). The logic state values of these two variables are transmitted as input signals to Neuron(RAINING) and then to the rainingAnalyzer. Logic analysis is completed in the working thread of the rainingAnalyzer object, which finally calls rainingAnalyzer.Fireup() to set the logic state value of Neuron(RAINING); from that logic state value it can be derived whether it is currently raining.
If the logic state value of Neuron(UMBRELLA) is 1 and the logic state value of Neuron(GROUND_WET) is 1, Neuron(RAINING) obtains a logic state value of 1 according to the AND algorithm, indicating that it is currently raining. If either of the logic state values of Neuron(UMBRELLA) and Neuron(GROUND_WET) is 0, Neuron(RAINING) obtains a logic state value of 0 according to the AND algorithm, indicating that it is not currently raining.
In summary, this embodiment takes the working mode of neurons as its bionic basis and decomposes program operation content into threads according to data flow state changes and the causal deduction relationships between data, thereby constructing the multithread data architecture and multithread message passing system of this embodiment; performing multithreaded operation through this architecture and message passing system improves the efficiency of multithreaded operation.
Those of ordinary skill in the art will appreciate that the various illustrative steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and have been described generally in terms of their functionality in the foregoing description for clarity of interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present application, it should be understood that the division of the steps and units is only one logical function division, and there may be other division manners in actual implementation, for example, multiple steps may be combined into one step, one step may be split into multiple steps, or some features may be omitted.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (10)

1. A multi-threaded data architecture comprising logical variables, logical nodes, a node list, neurons, a neuron center and neuron threads;
the logic variables are used for representing all logic variables related in a service range, and each logic variable has a corresponding variable name;
the logic nodes are used for representing causal relations among different logic variables, and each logic node comprises a logic variable representing a node and all upstream logic variables influencing the states of the logic variables;
the node list is used for storing each instantiated logical node object according to the logical relation record;
the neurons are used for expressing logical reasoning behaviors among the logical variables in the logical nodes, each neuron comprises a logical variable representing a corresponding node, and a logical reasoning algorithm for realizing a causal relationship between the logical variable and an upstream logical variable;
the neuron center is used for storing neurons in a centralized manner; when the program starts to run, the instantiated neuron center loads the node list, instantiates all neuron objects according to the contents of the node list, and constructs a logic network according to the logical relationships recorded in the node list;
each instantiated neuron thread object is registered with the neuron center, and the neuron center associates the registered neuron thread with the corresponding neuron object; after a neuron object senses a change of its input state, it triggers the associated neuron thread object to carry out the required operation, thereby carrying out multithreaded operation of the whole logic network.
2. The multithreaded data architecture of claim 1 wherein the neuron thread is a subclass of the Bot class, inheriting attributes and methods of the Bot class.
3. A multi-threaded message passing method, based on the multi-threaded data architecture of claim 1 or 2, comprising the steps of:
when a working thread of a neuron thread object generates a message, the neuron thread object transmits message data to an associated neuron object through a Fireup function;
when the neuron object senses that the logic state value changes according to the message data, the neuron object calls a fire function to push the logic state value to a downstream neuron object;
when the downstream neuron object senses the change of the input logic state value, the downstream neuron object wakes up the working thread of the associated downstream neuron thread object; the downstream neuron thread object receives the input message data by using an input function and processes the message data in its own thread, so that the message transmission from one working thread to another is completed.
4. A method of multi-threaded message passing as in claim 3, further comprising the steps of:
each neuron thread object is initialized and associated with a corresponding neuron object.
5. A multi-threaded message passing method according to claim 3, wherein initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object specifically comprises:
instantiating neuron centers and neuron thread objects after the program is started;
loading a node list into a neuron center, instantiating all neuron objects according to the content of the node list, and constructing a logic network according to the logic relation recorded by the node list;
the neuron thread object sends registration information to the neuron center, which associates the registered neuron thread object with a corresponding neuron object.
6. A multi-threaded message passing method according to claim 3, wherein the neuron object calls a fire function to push a logic state value to a downstream neuron object, specifically:
the neuron object queries a subscription table, and the subscription table stores neuron objects having downstream logical relations with the neuron objects;
obtaining a downstream neuron object according to a subscription table;
and the neuron object calls a fire function to push the logic state value to a downstream neuron object.
7. A multi-threaded message passing system, based on the multi-threaded data architecture of claim 1 or 2, comprising a message generation unit, a logical push unit and a thread crossing unit:
the message generating unit is used for transmitting message data to the related neuron object through a Fireup function when a working thread of the neuron thread object generates a message;
the logic pushing unit is used for the neuron object to call a fire function to push the logic state value to a downstream neuron object when the neuron object senses that the logic state value changes according to the message data;
and the thread cross-transmission unit is used for awakening the working thread of the associated downstream neuron thread object when the downstream neuron object senses the change of the input logic state value, receiving the input message data by the downstream neuron thread object by using an input function, and processing the message data in the thread of the downstream neuron thread object, so that the message transmission from one thread to another working thread is completed.
8. A multi-threaded messaging system as in claim 7, further comprising an initialization unit;
the initialization unit is used for initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object.
9. A multi-threaded messaging system as in claim 7, wherein initializing each neuron thread object and associating each neuron thread object with a corresponding neuron object specifically comprises:
instantiating neuron centers and neuron thread objects after the program is started;
loading a node list into a neuron center, instantiating all neuron objects according to the content of the node list, and constructing a logic network according to the logic relation recorded by the node list;
the neuron thread object sends registration information to the neuron center, which associates the registered neuron thread object with a corresponding neuron object.
10. A multi-threaded message passing system according to claim 7, wherein the neuron object invokes a fire function to push logic state values to downstream neuron objects, specifically:
the neuron object queries a subscription table, and the subscription table stores neuron objects having downstream logical relations with the neuron objects;
obtaining a downstream neuron object according to a subscription table;
and the neuron object calls a fire function to push the logic state value to a downstream neuron object.
CN202010311396.7A 2020-04-20 2020-04-20 Multithreading data system, multithreading message transmission method and system Active CN111552563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010311396.7A CN111552563B (en) 2020-04-20 2020-04-20 Multithreading data system, multithreading message transmission method and system


Publications (2)

Publication Number Publication Date
CN111552563A true CN111552563A (en) 2020-08-18
CN111552563B CN111552563B (en) 2023-04-07

Family

ID=71998528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010311396.7A Active CN111552563B (en) 2020-04-20 2020-04-20 Multithreading data system, multithreading message transmission method and system

Country Status (1)

Country Link
CN (1) CN111552563B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115866035A (en) * 2022-11-30 2023-03-28 广东舜势测控设备有限公司 Multithreading data high-speed pushing method, system, controller and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102594927A (en) * 2012-04-05 2012-07-18 高汉中 Neural-network-based cloud server structure
CN103488662A (en) * 2013-04-01 2014-01-01 哈尔滨工业大学深圳研究生院 Clustering method and system of parallelized self-organizing mapping neural network based on graphic processing unit
CN105550749A (en) * 2015-12-09 2016-05-04 四川长虹电器股份有限公司 Method for constructing convolution neural network in novel network topological structure
CN105574585A (en) * 2015-12-14 2016-05-11 四川长虹电器股份有限公司 Sample training method of neural network model on the basis of multithreading mutual exclusion
CN106648816A (en) * 2016-12-09 2017-05-10 武汉斗鱼网络科技有限公司 Multithread processing system and multithread processing method
CN108763360A (en) * 2018-05-16 2018-11-06 北京旋极信息技术股份有限公司 A kind of sorting technique and device, computer readable storage medium
CN110135575A (en) * 2017-12-29 2019-08-16 英特尔公司 Communication optimization for distributed machines study
CN110175071A (en) * 2019-04-09 2019-08-27 南昌嘉研科技有限公司 A kind of multithread programs framework method and architecture system
CN110309913A (en) * 2018-03-27 2019-10-08 英特尔公司 Neuromorphic accelerator multitasking
CN110322010A (en) * 2019-07-02 2019-10-11 深圳忆海原识科技有限公司 The impulsive neural networks arithmetic system and method calculated for class brain intelligence with cognition
WO2019218896A1 (en) * 2018-05-18 2019-11-21 上海寒武纪信息科技有限公司 Computing method and related product
CN110991626A (en) * 2019-06-28 2020-04-10 广东工业大学 Multi-CPU brain simulation system


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZHONGWEI LIN等: "An asynchronous GVT computing algorithm in neuron time warp-multi thread", 《2015 WINTER SIMULATION CONFERENCE (WSC)》 *
周媛等: "用于能耗数据分析的改进并行BP算法", 《计算机工程》 *
唐舸轩: "面向异构平台的深度学习并行优化算法研究与实现", 《中国优秀硕士学位论文全文数据库信息科技辑》 *
徐频捷等: "基于脉冲神经网络与移动GPU计算的图像分类算法研究与实现", 《计算机工程与科学》 *
梁君泽: "多CPU类脑模拟系统架构研究", 《中国优秀硕士学位论文全文数据库信息科技辑》 *
王玉哲等: "仿生物型人工神经网络的探索与实现", 《计算机工程与设计》 *


Also Published As

Publication number Publication date
CN111552563B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Agha An overview of actor languages
Shunkevich Ontology-based design of knowledge processing machines
Chandra et al. COOL: A language for parallel programming
CN111552563B (en) Multithreading data system, multithreading message transmission method and system
Carrión Kubernetes as a standard container orchestrator-a bibliometric analysis
Fonseca et al. Agent behavior architectures a MAS framework comparison
Carvalho Junior et al. Towards an architecture for component‐oriented parallel programming
CN109408212B (en) Task scheduling component construction method and device, storage medium and server
Warren A model for dynamic configuration which preserves application integrity
de Oliveira Dantas et al. A component-based framework for certification of components in a cloud of HPC services
Bardakoff et al. Hedgehog: understandable scheduler-free heterogeneous asynchronous multithreaded data-flow graphs
de Carvalho-Junior et al. The design of a CCA framework with distribution, parallelism, and recursive composition
Palacz et al. An agent-based workflow management system
Di Pierro et al. Continuous-time probabilistic KLAIM
Eddin Towards a Taxonomy of Dynamic Reconfiguration Approaches.
Diaconescu A framework for using component redundancy for self-adapting and self-optimising component-based enterprise systems
Zuo High level support for distributed computation in weka
Setzkorn et al. JavaSpaces–an affordable technology for the simple implementation of reusable parallel evolutionary algorithms
US20040049655A1 (en) Method and apparatus for communication to threads of control through streams
Bader et al. Testing concurrency and communication in distributed objects
Tian et al. A Multiprocessing Framework for Heterogeneous Biomedical Embedded Systems with the Proposal of a Finite State Machine-Based Architecture.
Nowicki et al. Scalable computing in Java with PCJ Library. Improved collective operations
Tokoro et al. Concurrent programming in Orient84/K: an object-oriented knowledge representation language
Niranjanamurthy et al. Efficient file downloading and distribution using grid computing for intelligent applications
Pardo et al. Population based metaheuristics in Spark: Towards a general framework using PSO as a case study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant