CN103473032B - Independently runnable active component, active-component composition model, and component splitting method - Google Patents

Independently runnable active component, active-component composition model, and component splitting method

Info

Publication number
CN103473032B
CN103473032B (application CN201310020477.1A)
Authority
CN
China
Prior art keywords
message
active component
layer
operator
bus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310020477.1A
Other languages
Chinese (zh)
Other versions
CN103473032A (en)
Inventor
龙建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Shenku Robot Co., Ltd.
Original Assignee
龙建
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 龙建
Priority to CN201310020477.1A
Priority to PCT/CN2013/001370 (published as WO2014110701A1)
Publication of CN103473032A
Application granted
Publication of CN103473032B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs

Abstract

The invention provides an independently runnable active component, an active-component composition model, and a component splitting method. The independent active-component composition model is the set P = {layer-1 active component, layer-2 active-component subset, ..., layer-n active-component subset}, where n >= 2. Each active component in the layer-n subset is assembled on a layer-n virtual message bus to obtain a single active component of the layer-(n-1) subset; and so on, until each active component in the layer-2 subset is assembled on the layer-2 virtual message bus to obtain the layer-1 active component. Every active component of every layer obeys the same protocol. A component can therefore leave its concrete application environment and perform its function independently, components can be reused, restructured, and combined simply and efficiently, and the whole construction system acquires a high degree of reusability.

Description

Independently runnable active component, active-component composition model, and component splitting method
Technical field
The invention belongs to the field of computer technology and relates in particular to an independently runnable active component, an active-component composition model, and a component splitting method.
Background technology
As is well known, the ultimate goal of software design is that the software should be whatever the real world is, so that the software simulates the real world. Because the real world is complex and varied, simulating it faithfully is rarely easy. Long practice has shown that the more faithfully a software system simulates the details of the real world, the easier the software is to design, understand, and maintain. Because object-oriented programming models real-world things directly, it is easy to understand, maintain, and change; it has therefore replaced procedure-oriented programming and become today's mainstream programming paradigm.
However, because of hardware cost and many other constraints, the "parallel" activity that is ubiquitous in the real world, in which many objects act at the same time, can rarely be simulated truly on a single computer. The overwhelming majority of activity in modern software systems is only "pseudo-parallel": viewed macroscopically, one computer executes several tasks and programs at once and several objects are running at the same time; viewed microscopically, at any single instant only one program is running. Because the processor is very fast and switches rapidly back and forth among the programs, over a somewhat longer period we perceive the programs as executing and being active simultaneously. This phenomenon is usually called "concurrency", to distinguish it from genuine "parallelism".
Concurrency mechanisms are generally implemented in low- and middle-level software such as the operating system, which exposes dedicated concurrency service interfaces so that upper-level programs can carry out concurrent activities. An upper-level application then calls these concurrency service interfaces to present itself as one or more concurrent tasks.
The scheduling operation between concurrent entities (tasks, processes, threads, fibers, etc.), performed by the operating system, a software bus, and the like, provides the implementation mechanism of concurrency. In modern operating systems, preemptive scheduling is the generally adopted scheduling strategy, but it has some fatal defects, listed below:
(1) Stack-space problem: preemptive scheduling may interrupt the execution of a concurrent entity at any moment, so the running environment of the concurrent entity (at minimum including the program counter and registers) must be saved and restored, which requires RAM stack space. In ordinary operating settings (e.g. a PC) this problem is not prominent, but with a very large number of concurrent entities (e.g. a single-chip microcomputer holding thousands of network connections) it becomes quite serious, and in special settings where RAM is scarce (e.g. WSN applications) such scheduling becomes infeasible.
(2) Execution-efficiency problem: because the running environment of each concurrent entity must be saved and restored, this part of the scheduling code adds to execution. For a very lightweight scheduler (e.g. TinyOS), the added execution time is considerable relative to the total scheduling time and seriously affects the efficiency of lightweight scheduling.
(3) Contention over shared data: because preemptive scheduling may interrupt the execution of a concurrent entity at any time, all data and resources shared between concurrent entities become objects of contention, that is, critical resources. If all of these contended objects are protected with critical sections or other uniform general measures, the overall operating efficiency of the system falls to an unacceptable level. If instead the sharing structure is carefully designed and only some objects are protected by general measures, then the slightest carelessness when writing or maintaining the code causes timing faults arising from contention for critical resources (faults that are, moreover, especially hard to reproduce and localize); this raises the demands on the professional quality of programmers and maintainers considerably, increases design and maintenance costs, and reduces system reliability. In particular, for large amounts of irregularly shared concurrent data (e.g. hundreds of distinct, specialized threads), ordinary developers in actual practice avoid such designs unless it is really necessary.
(4) Contention and reuse problem: the data-sharing designs optimized for efficiency described above bring a code-reusability problem. Because shared-data protection code that eliminates contention is written specifically for one project's contention environment, it generally has no general applicability: even for a very similar project, the data-contention conditions faced will most likely be different, so the optimized data-sharing design must be redone and the original module cannot be reused directly.
TinyOS is a miniature operating system developed by the University of California, Berkeley (UC Berkeley) for wireless sensor networks (WSN, Wireless Sensor Network). TinyOS uses a two-level scheduling model: task scheduling and hardware-event scheduling. Hardware-event scheduling is activated by hardware interrupts, can preempt ordinary tasks, and is mainly used for fast, high-priority real-time response. It is essentially a copy of an ordinary interrupt handler, differing in that it can send signals to the task scheduler to activate ordinary tasks, and, using the asynchronous capability of the nesC keyword async, it can also call directly into the nesC construction system, invoke the command-handling functions in components, and send asynchronous events to components.
The basic task of TinyOS is a parameterless function. Task scheduling uses a cooperative first-in-first-out (FIFO) algorithm: tasks do not preempt one another and have no priority levels. Once a task obtains the processor, it always runs to completion. Tasks are generally used for work without strict timing requirements and are in essence a deferred procedure call (DPC, Deferred Procedure Call) mechanism. The TinyOS 2.x scheduler can be customized and replaced.
As shown in Fig. 1, the TinyOS 2.x core task control block (PCB) is a fixed-length byte array that forms a FIFO ready-task queue and a waiting-task pool. Each task in the system is represented by a one-byte task ID, numbered 0-255, where 255 represents the idle task NO_TASK, i.e. "no task exists". The system can therefore hold at most 255 valid tasks. The actual number of tasks in a concrete application system, that is, the actual length of the byte array, is generated automatically by the compiler when the source code is compiled.
What the byte array stores is the task-ready flag. If a task ID has received no event and does not need to join the FIFO ready queue, its byte stores the NO_TASK flag and the task enters the waiting-task pool. If an event occurs for the task ID and activates it into the ready state, then the byte of this task ID stores the next ready task, indicating that this ID has entered the FIFO ready-task queue and is awaiting processing.
When an activated task ID is enqueued in parallel, blocking critical-section protection code is used. If the ID is already in the ready state, a busy flag is returned; otherwise it joins the ready queue at the tail. Since only a one-byte ID is enqueued, the critical section can be passed through quickly and does not much affect interrupt response speed. This algorithm avoids the potential problem of one ID being enqueued many times: if the same ID could occupy several byte positions, it might in some situations fill up the byte array, so that other tasks could not be enqueued and the system would appear dead.
When a ready task ID is dequeued from the head of the queue, the same blocking critical-section protection code is used. If no task is ready, a signal is sent to the power-saving device and the system enters a power-saving state. Otherwise, the entry address of the task is retrieved and the task is executed. Because the scheduler holds only task IDs and no additional parameters, tasks must be parameterless functions. Tasks are also cooperative: only after the previous task has exited completely (at which point the stack is empty) can the next task be executed. All tasks therefore share the same stack space.
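For illustration only, a minimal C sketch of such a byte-array FIFO scheduler follows. It is a simplified reconstruction under stated assumptions, not TinyOS source code; NUM_TASKS, entry_table and the platform hooks disable_interrupts(), enable_interrupts() and enter_low_power() are placeholders invented for this example.
    #define NO_TASK   255u
    #define NUM_TASKS 8u                            /* assumed; TinyOS sizes this at compile time */
    typedef void (*task_fn_t)(void);
    extern const task_fn_t entry_table[NUM_TASKS];  /* task ID -> entry address (assumed)         */
    void disable_interrupts(void);                  /* assumed platform hooks                     */
    void enable_interrupts(void);
    void enter_low_power(void);

    /* next_id[id] holds the next ready ID, or NO_TASK when id is waiting or last in the queue. */
    static unsigned char next_id[NUM_TASKS];
    static unsigned char head = NO_TASK, tail = NO_TASK;

    void scheduler_init(void) {
        unsigned char i;
        for (i = 0; i < NUM_TASKS; i++) next_id[i] = NO_TASK;
    }

    /* Parallel enqueue of a one-byte task ID, protected by a short blocking critical section. */
    int post_task(unsigned char id) {
        int ok = 0;
        disable_interrupts();
        if (next_id[id] == NO_TASK && tail != id) {     /* not already in the ready queue      */
            if (head == NO_TASK) head = id; else next_id[tail] = id;
            tail = id;
            ok = 1;                                     /* joined the ready queue at the tail  */
        }
        enable_interrupts();
        return ok;                                      /* 0 = busy flag, already ready        */
    }

    /* Serial dequeue from the head; tasks are parameterless and share one stack. */
    void run_next_task(void) {
        unsigned char id;
        disable_interrupts();
        id = head;
        if (id != NO_TASK) {
            head = next_id[id];
            if (head == NO_TASK) tail = NO_TASK;
            next_id[id] = NO_TASK;
        }
        enable_interrupts();
        if (id == NO_TASK) enter_low_power();           /* no ready task: signal power saving  */
        else entry_table[id]();                         /* run to completion, then return      */
    }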
All basic tasks of TinyOS 2.x are parameterless functions, and the task ID of each basic task is permanently allocated only one byte, which stores the task-ready flag and has no room for other parameters. In essence, therefore, it is a semaphore ("signal lamp") system. Compared with a message system that can carry some parameters, it has several weaknesses, listed below:
(1) A task cannot carry input parameters: after a task finishes and exits, the stack is emptied, and a synchronization-semaphore system cannot carry or preserve parameters. The scope of application of tasks is therefore limited and can only be made up with extra measures, for example implementing a self-counting module with a task.
(2) Task information cannot be managed in a unified way: because a semaphore system cannot carry parameters, the information-exchange scheme between the external environment and each task depends entirely on what the environment and each task agree between themselves; there is no unified, standard means of expression. Consequently the information exchanged between the environment and the tasks, and between task and task, cannot be collected, monitored, filtered, controlled, or managed directly by unified means; it can only be made up with extra measures. This is a great limitation on debugging, testing, and controlling the software system.
(3) An active message cannot be expressed completely: because a semaphore system cannot carry parameters, the information-exchange scheme must be agreed separately between the environment and each task and is not a unified standard. A sent signal can only notify the receiving task that a message has occurred; it cannot express the message completely in one step. The receiving task therefore has to depend on the specific information-exchange scheme and use a pull-model mechanism, fetching the concrete content of the information by function call. For fully reusable modules and a fully transparent distributed computing system this is a fatal restriction (the reason is given later) and is difficult to remedy.
When task IDs are enqueued in parallel and dequeued serially, TinyOS 2.x uses blocking critical-section protection in both cases. Since only a one-byte ID is enqueued, the critical section is passed through quickly and interrupt response and system performance are not much affected; this is possible because a very simple semaphore mechanism is used. If, to meet system requirements, a message mechanism were used instead, then besides the well-known problems of blocking synchronization deadlock, priority inversion, interrupts that cannot take locks, critical sections that cannot be concurrent, and so on, there are further problems, listed below:
(1) Real-time performance problem: compared with a one-byte task ID, a message is generally much longer; enqueueing and dequeueing take longer, so the critical-section execution time grows considerably. In typical single-chip microcomputer (MCU) systems, critical-section protection is generally achieved by disabling interrupts, so interrupt response becomes slower, real-time performance suffers, and overall system efficiency drops.
(2) Hardware-implementation problem: across processors and software systems, the technical means of protecting parallel enqueueing with critical sections vary widely, so it is not easy to derive a concise, efficient, unified parallel enqueueing model; consequently it is not easy to implement the key operations in hardware to assist parallel enqueueing, improve execution efficiency, or bring other advantages.
TinyOS 1.x and ordinary general-purpose operating systems store the entry address of the task function directly in their scheduler data structures. After the scheduler selects a task and completes the necessary preparations, it jumps to that address and executes the task code. Compared with using a task ID plus an ID-to-address mapping table, this has several shortcomings, listed below:
(1) The entry address has a single meaning: it cannot carry other useful information (such as a static priority).
(2) The entry address is meaningful only within a single machine: across computers the address has no meaning at all.
For fully transparent distributed parallel task computation, this is therefore a fatal restriction.
TinyOS 2.x uses a one-byte basic-task ID, which keeps the scheduling kernel concise and efficient, but it limits the maximum number of tasks that can be held to 255; a somewhat larger system with more tasks cannot be accommodated, which hurts system scalability.
TinyOS 2.x's one-byte basic-task ID doubles as the FIFO ready-queue pointer and the task-ready flag. In this respect it is the same as most other operating systems: there is a task PCB table of non-zero length kept in RAM. This has several weaknesses, listed below:
(1) Execution-efficiency problem: because various operations must be performed on the task PCB table (e.g. moving a task from the waiting state to the ready state), this part of the scheduling code adds to execution. For a very lightweight scheduler (e.g. TinyOS), this extra execution time is considerable relative to the total scheduling time and hurts the efficiency of lightweight scheduling.
(2) Hardware-implementation problem: across processors and software systems, the contents of the task PCB table and the techniques and optimizations used to implement it vary endlessly, so it is not easy to derive a concise, efficient, unified implementation model of concurrency; consequently it is also not easy to implement the key operations in hardware to assist the concurrency mechanism, improve execution efficiency, or bring other advantages.
(3) Space-occupation problem: because the task PCB table lives in RAM, even if its RAM usage is tiny (in essence TinyOS 2.x can mark a task's waiting or ready state with a single bit), when RAM is scarce (as in WSN systems) and there are thousands of tasks (a case is described later), the system cannot perform concurrent scheduling. This becomes a fatal technical defect and limits the scope of application of the technique.
When a TinyOS system is built, components are written in the nesC language, wired together through interface specifications, and statically assembled by way of function calls when the program is compiled. In essence, therefore, what a component publishes to the outside is a function name (valid at link time) and a function address (valid at run time). Compared with a component scheme that publishes IDs, this has many weaknesses, listed below:
(1) Inconsistent component models: TinyOS 2.x tasks use an ID scheme while its components use an address scheme. The two are inconsistent, so there are two models, which complicates the model of the system's basic modules.
(2) Weak adaptability of the address scheme: an ID scheme crosses languages and heterogeneous systems more easily and has better universality.
(3) The address scheme is hard to adapt dynamically: at run time, unless it is specially maintained, a function address can no longer be traced. A predefined component-ID scheme makes it easier to reference, change, replace, and maintain code, and easier to hot-upgrade a single module or the code as a whole.
(4) A function address is meaningful only within a single machine: across computers the address has no meaning at all. For fully transparent distributed parallel task computation this is therefore a fatal restriction.
In current TinyOS systems, and in structured programming, modular programming, object-oriented programming, component programming, and other such techniques, when small modules are linked and assembled into larger modules, the composition is always completed by means of function calls. This approach has a fatal defect and is one of the most central reasons why software modules in complex software systems are hard to reuse. It is explained in detail below.
For ease of description, two terms are used; they are briefly explained first:
Pull model and Push model: these two terms originally denote information-propagation patterns on the Internet. Pull means that users actively browse web sites and fetch (pull) information from the sites they are interested in. Push means that a web site actively sends (pushes) messages to certain specific users.
A module obtains a result by calling a function located in another module. This function call is also an information-access process, similar to pulling information over the network, and is therefore also called the pull model. If a module is a concurrent entity (a thread, etc.) and actively sends a message to another concurrent entity, the process of sending the message resembles a network push and is therefore also called the push model.
The most significant difference between the pull model and the push model is this: on every pull, the user must specify the object to pull from and the concrete content to pull; on every push, the user needs to take no action at all (apart, of course, from a little one-time work beforehand, such as subscribing).
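As a purely illustrative C sketch, assuming a small message-bus API invented for this example (all names here are hypothetical), the difference can be shown in code:
    /* Pull model: on every access the caller names the callee and the content it wants. */
    int sensor_read_temperature(void);                /* function located in another module   */
    void control_loop_pull(void) {
        int t = sensor_read_temperature();            /* direct call: CALL into the callee    */
        (void)t;                                      /* ... use the pulled value ...         */
    }

    /* Push model: the producer posts a self-contained message and names no receiver function. */
    struct temp_msg { unsigned short target_id; short temperature; };
    extern int bus_post(const struct temp_msg *m);    /* assumed bus interface                */
    void sensor_task_push(short t) {
        struct temp_msg m;
        m.target_id   = 0x0102;                       /* receiver resolved elsewhere by ID    */
        m.temperature = t;
        bus_post(&m);                                 /* nothing is waited for after the push */
    }
In the pull sketch the caller must know the callee's name and signature; in the push sketch it only needs an ID and a message format.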
Fig. 2 shows two modules working in the pull model. Module D represents the called module; all parts other than module D belong to the module that performs the active function call. To analyze the calling process, the calling module is decomposed above into functionally equivalent parts.
In the figure, In denotes the input parameters (messages) required by the module and Out the information (messages) the module outputs; module F is the core function that this module must perform, and module B is the other part of the functions this module performs. In essence, therefore, the functions F + B are the reason this module exists.
Module C represents the direct function call, equivalent to an assembly CALL instruction, after which the right of execution of the CPU passes directly into module D. In the pull model this is a link that must exist. Module D needs a certain parameter Pm; this parameter is obtained via module A, i.e. after parameter transformation, and is passed to module D together with the call made by module C.
Module A performs parameter conversion: mainly from the input parameters In, combined with related variables 1, it carries out parameter-format conversion, matching, and similar work, and obtains the parameter Pm required by module C and the parameter Pc required by module F.
In some cases, in order to obtain the parameters Pm and Pc, the parameter conversion in module A must obtain another piece of information, Pb. This information Pb can be obtained only after part of the module's functionality (the preliminary function of B) has first been completed. The preliminary function of module B is therefore an optional module that may not exist. If it does exist, it obtains the parameter Pf from module A, completes the predetermined part of the module's function, feeds the information Pb back to module A, and, when core module F needs it, supplies the possible parameter P to module F.
Module E rearranges the information Od returned by the called function of module D, together with related variables 2, into the parameter Pr that module F can use directly, and passes it to core module F.
Module F, after obtaining the parameters Pc, Pr, and P, completes the core function and produces the output information Out.
The parameters Pc and Pm may be identical to the parameter In, in which case module A may not need to exist. The information Od returned by module D's called function may be identical to the parameter Pr, in which case module E may not need to exist. The function call of module C, however, is the link that must exist in the pull model.
As stated above, for the calling module, the parameter conversion in module A and the function call in module C have nothing to do with the module's own function; they are code that has to be placed there purely because, working in the pull model, the information Pr must be obtained. Viewed from the angle of module cohesion, their existence lowers the cohesion of the calling module. For the sake of pure code reuse and module cohesion, the preliminary function of module B can preferably also be separated from the calling module. Module E performs message arrangement; in some cases it may be retained to satisfy interface requirements, but preferably it too is stripped off. From a design standpoint there should generally also be some other solution that peels modules B and E away entirely. Then, when the pull model is not used, only core module F remains as the calling module's only code; in this way the module attains the highest reusability and portability.
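The decomposition can be made concrete with a small, hypothetical C sketch; the arithmetic is arbitrary and only marks where the parts of Fig. 2 sit in the code:
    extern int d_called_module(int pm);            /* module D: the called module            */

    int calling_module_pull(int in) {
        int pm = in * 2;                           /* module A: parameter conversion         */
        int od = d_called_module(pm);              /* module C: the unavoidable direct call  */
        int pr = od + 1;                           /* module E: arrange the returned info    */
        return in + pr;                            /* module F: the core function -> Out     */
    }

    /* If the glue A, C and E could be peeled away, only core F would remain and the
       module would no longer depend on D at all: */
    int core_f(int pc, int pr) { return pc + pr; }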
As Fig. 2 shows, the most fatal shortcoming of the pull model is the indivisible function call of module C, which must exist (otherwise it would not be the pull model). Because module C must explicitly list the function name (or address) and the parameter Pm, this part of the code must be embedded in the calling module. Consequently, whenever the calling module is ported or reused, the influence of module D on the calling module has to be considered. There are typically three ways of dealing with this influence:
(1) Do not analyze or modify the calling module or the called module represented by D; reuse the two together as a whole.
This is the best solution: the cost of porting and reuse is lowest, and efficiency and reliability are highest. The problem is that the calling module and the called module represented by D generally have other subordinate modules of their own; unless all of these subordinate modules (i.e. the whole subtree rooted at the calling module) are ported and reused as a whole, the subordinate modules must still be reorganized and adjusted. At the same time, whether the business logic of the new project needs exactly this whole subtree is itself a big question. The subtree-porting reuse scheme therefore has a greatly narrowed scope of application; it is suitable only for very similar projects and has no universality.
(2) Do not analyze or modify the calling module; only simulate the inputs, outputs, and corresponding functions of module D.
This approach is relatively simple to implement, but one must still be familiar with the professional business knowledge and model that module D involves. If that professional knowledge is far afield, this is in itself no small burden.
At the same time, this scheme has another burden: a heap of useless code is left behind, wasting space and time and lowering the spatial and temporal efficiency of the code. When the system is relatively complex and the efficiency requirements are high, the problem is all the more prominent. In extreme cases it often drives designers simply to start from scratch and develop anew, unable to make use of the existing modules and code.
(3) Analyze and modify the calling module, change the inputs, outputs, and function of module D, or simply remove it.
This is the most complicated to carry out. It requires understanding in detail the code logic of module A, module B, module C, module E, and the whole calling module; the professional business knowledge and model of the calling module must be thoroughly understood, and one must also be familiar with the professional business knowledge and model involved in module D. If these two kinds of professional knowledge are far apart, that alone is a considerable burden. At the same time, analyzing and modifying code is also closely tied to how reusable the original design was: code that was poorly designed, or that has been patched repeatedly and reluctantly during maintenance, can be very chaotic and have very poor reusability. This, too, often drives designers simply to start from scratch and develop anew, unable to make use of the existing modules and code.
Summary of the invention
To address the defects of the prior art, the invention provides an independently runnable active component, an active-component composition model, and a component splitting method, which effectively overcome the weaknesses of existing "concurrency" implementation techniques and realize concurrency and parallel programming efficiently and reliably, with a series of advantages such as universality, low cost, efficiency, reliability, energy saving, reusability, transparent distribution, a micro-kernel, and inherent support for object technology.
The technical solution adopted by the invention is as follows:
The invention provides an independent active-component composition model. The model is the set P = {layer-1 active component, layer-2 active-component subset, ..., layer-n active-component subset}, where n >= 2; each active component in the layer-n active-component subset is assembled on a layer-n virtual message bus to obtain a single active component of the layer-(n-1) active-component subset; each active component in the layer-(n-1) active-component subset is assembled on a layer-(n-1) virtual message bus to obtain a single active component of the layer-(n-2) active-component subset; and so on, until each active component in the layer-2 active-component subset is assembled on the layer-2 virtual message bus to obtain the layer-1 active component;
wherein every active component of every layer in the set P obeys the same protocol.
Preferably, each active component in the set P, from the layer-1 active component through the active components of the layer-n subset, comprises: a virtual message bus, an interface-operator-ID mapping table, an alias linked list, and one or more operators; wherein the interface-operator-ID mapping table stores the correspondence between interface operator IDs and entry functions, and the alias linked list stores the correspondence between referenced operator IDs and interface operator IDs; an interface operator ID is an operator identifier of the active component itself, and a referenced operator ID is an identifier used inside the active component to refer to an operator attached to the message bus.
Preferably, that each active component in the layer-n active-component subset is assembled on a layer-n virtual message bus to obtain a single active component of the layer-(n-1) active-component subset, where n >= 3, is specifically as follows:
each active component in the layer-n active-component subset comprises a layer-n virtual message bus, a layer-n interface-operator-ID mapping table, a layer-n alias linked list, and one or more layer-n operators; the single active component of the layer-(n-1) active-component subset obtained after assembly comprises a layer-(n-1) virtual message bus, a layer-(n-1) interface-operator-ID mapping table, a layer-(n-1) alias linked list, and one or more layer-(n-1) operators;
during assembly, the layer-n virtual message buses are bus-merged to obtain the layer-(n-1) virtual message bus; the layer-n interface-operator-ID mapping tables are table-merged to obtain the layer-(n-1) interface-operator-ID mapping table; the layer-n alias linked lists are table-merged to obtain the layer-(n-1) alias linked list; and the layer-n operators are merged to obtain the layer-(n-1) operators.
Preferably, that each active component in the layer-2 active-component subset is assembled on the layer-2 virtual message bus to obtain the layer-1 active component is specifically as follows:
each active component in the layer-2 active-component subset comprises a layer-2 virtual message bus, a layer-2 interface-operator-ID mapping table, a layer-2 alias linked list, and one or more layer-2 operators; the layer-1 active component comprises a layer-1 virtual message bus, a layer-1 interface-operator-ID mapping table, a layer-1 alias linked list, and one or more layer-1 operators;
during assembly, the layer-2 virtual message buses are bus-merged to obtain the layer-1 virtual message bus; the layer-2 interface-operator-ID mapping tables are table-merged to obtain the layer-1 interface-operator-ID mapping table; the layer-2 alias linked lists are table-merged to obtain the layer-1 alias linked list; and the layer-2 operators are merged to obtain the layer-1 operators.
Preferably, the correspondence between referenced operator IDs and interface operator IDs stored in the alias linked list is an equivalence mapping.
Preferably, the independent active-component composition model has a built-in cooperative concurrent frog bus interface, which is used to attach to a cooperative concurrent frog bus.
Preferably, the cooperative concurrent frog bus comprises: an information acquisition module, a parallel ring allocator, a linear memory block, a message packing module, a parallel enqueuer, a message queue pool, a queue scheduling manager, an entry mapping table, and a system stack;
wherein the information acquisition module obtains the target operator ID and the message-length value from a received external parallel message awaiting processing, the target operator ID being the identifier of the operator that will process the message; it also obtains the length of the additional management information, and adds that additional length and the obtained message-length value to compute the space the message will occupy, the additional management-information length being >= 0;
the parallel ring allocator is a non-blocking parallel ring space allocator: according to the space-occupation value obtained by the information acquisition module, it dynamically carves the linear memory block according to a ring-allocation principle and obtains, in a non-blocking parallel manner, an empty message slot whose size equals the space-occupation value;
the message packing module fills the message and the additional management information into the empty message slot allocated by the parallel ring allocator, obtaining a non-empty message slot;
the parallel enqueuer performs non-blocking parallel enqueue operations on the empty message slot or the non-empty message slot;
the message queue pool buffers enqueued messages that have not yet been processed;
the queue scheduling manager selects from the message queue pool, according to a preset scheduling strategy, the designated message to be processed, and cooperatively completes the dequeue operation of the designated message;
the entry mapping table is looked up with the target operator ID to obtain the function entry address corresponding to the target operator ID; according to the function entry address and the slot address of the designated message, the corresponding operator's execution function is called, thereby processing the dequeued designated message;
the system stack is the stack space shared by all operators on the cooperative concurrent frog bus; the stack spaces shared by the operators cover one another, i.e. they are overlapping rather than stacked;
furthermore, an operator on the cooperative concurrent frog bus has only a ready state: even when no message exists on the bus, its operators remain ready; as soon as a message arrives on the bus and the operator corresponding to that message is scheduled, the scheduled operator obtains the processor immediately.
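For orientation only, a hypothetical C sketch of these bus elements is given below; the structure layout, the names, and the helper functions frog_bus_dequeue() and frog_bus_release_slot() are assumptions made for the example, not a normative implementation of the bus.
    typedef void (*operator_fn_t)(const void *msg, unsigned len);

    struct frog_bus {
        unsigned char        *linear_block;            /* linear memory block carved into slots  */
        unsigned              block_size;
        unsigned              alloc_head, alloc_tail;  /* parallel ring allocator pointers       */
        unsigned              q_head, q_tail;          /* parallel enqueuer / message queue pool */
        const operator_fn_t  *entry_table;             /* entry mapping table: operator ID -> fn */
        /* one shared system stack: every operator runs to completion on the same stack */
    };

    /* assumed helpers standing for the queue scheduling manager and space reclamation */
    const unsigned char *frog_bus_dequeue(struct frog_bus *b,
                                          unsigned short *target_id, unsigned short *len);
    void frog_bus_release_slot(struct frog_bus *b, const unsigned char *slot);

    /* Dequeue one designated message and run the operator named by its target operator ID. */
    void frog_bus_dispatch_one(struct frog_bus *bus) {
        unsigned short target_id, len;
        const unsigned char *slot = frog_bus_dequeue(bus, &target_id, &len);
        if (slot != 0) {
            operator_fn_t fn = bus->entry_table[target_id];  /* entry mapping table lookup   */
            fn(slot, len);                 /* operators are always ready; run immediately    */
            frog_bus_release_slot(bus, slot);
        }
    }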
Preferably, the message is a fixed-length message or a variable-length message.
Preferably, when the parallel ring allocator carves an empty message slot at the very end of the linear memory block, if the free space remaining at the end of the block is smaller than the space the message occupies, that remaining end space is simply abandoned and forms a discarded slot.
Preferably, the message packing module first fills the message and the additional management information into the empty message slot allocated by the parallel ring allocator, obtaining a non-empty message slot, and the parallel enqueuer then performs a non-blocking parallel enqueue operation on the non-empty message slot, specifically as follows:
the parallel ring allocator is configured with a first head pointer and a first tail pointer; when a new empty message slot needs to be allocated, a space equal to the message's space-occupation value is marked directly after the current first tail pointer, yielding the new empty message slot, and the first tail pointer is then moved, in a non-blocking parallel manner, to the tail of the new empty message slot;
the parallel enqueuer is configured with a second head pointer and a second tail pointer; the non-blocking parallel enqueue of the non-empty message slot is achieved by moving the second tail pointer in a non-blocking parallel manner;
wherein the first head pointer and first tail pointer configured for the parallel ring allocator are different from the second head pointer and second tail pointer configured for the parallel enqueuer.
Preferably, the parallel enqueuer first performs a non-blocking parallel enqueue operation on the empty message slot, and the message packing module then fills the message and the additional management information into the already-enqueued empty message slot, specifically as follows:
the parallel ring allocator and the parallel enqueuer share the same head pointer and tail pointer, so that at the same time as the parallel ring allocator allocates an empty message slot from the linear memory block, the parallel enqueuer also enqueues that empty message slot; the message packing module then fills the message and the additional management information into the already-enqueued empty message slot.
Preferably, in a preemptive environment, before the parallel ring allocator allocates an empty message slot from the linear memory block, the slot is put into a dormant state in advance, a slot in the dormant state being called a sleeping message slot; the message packing module then fills the message and the additional management information into the sleeping message slot, and once filling is complete and the sleeping message slot is activated, it changes into the active state, a slot in the active state being called an active message slot; a sleeping message slot is a message slot that cannot be scheduled by the cooperative concurrent frog bus for execution by an operator, whereas an active message slot is a message slot within the normal scheduling scope of the cooperative concurrent frog bus.
Preferably, when variable-length messages are used, sleeping message slots and active message slots are distinguished by whether the message-length parameter written into the slot is 0: when the message-length parameter written into the slot is 0, the slot is a sleeping message slot; when it is not 0, the slot is an active message slot.
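A minimal C sketch of this convention, assuming a simple slot header invented for the example, might look as follows:
    #include <string.h>

    struct slot_header {
        volatile unsigned short len;        /* 0 = sleeping slot, not yet schedulable  */
        unsigned short target_id;           /* operator that will process the message  */
    };

    /* Producer in a preemptive environment: reserve the slot asleep, fill it, then activate. */
    void fill_and_activate(struct slot_header *slot, unsigned short id,
                           const void *payload, unsigned short n) {
        slot->len = 0;                                    /* sleeping: the bus skips it    */
        slot->target_id = id;
        memcpy((unsigned char *)(slot + 1), payload, n);  /* message body follows header   */
        slot->len = n;                                    /* written last: slot now active */
    }
Writing the length field last means a half-filled slot is never visible to the scheduler, which is the purpose of the sleeping/active distinction described above.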
Preferably, the bus further comprises a monitoring and management center, which performs centralized monitoring, analysis, control, filtering, and management of the messages inside the cooperative concurrent frog bus.
Preferably, the bus further comprises a space reclamation module, which reclaims dequeued messages themselves and their message slots inside the cooperative concurrent frog bus.
Preferably, the bus further comprises a power-saving device, which, when no message exists in the cooperative concurrent frog bus, immediately notifies the application system using this cooperative concurrent frog bus to perform power-saving scheduling.
The invention also provides a runnable active-component composition model based on the above independent active-component composition model, in which the set P further comprises a layer-0 active component; the layer-1 active component is assembled on a message bus to obtain the layer-0 active component.
Preferably, the layer-0 active component comprises: the message bus, a layer-0 interface-operator-ID mapping table, a layer-0 alias linked list, and one or more layer-0 operators; the layer-1 active component comprises a layer-1 virtual message bus, a layer-1 interface-operator-ID mapping table, a layer-1 alias linked list, and one or more layer-1 operators;
that the layer-1 active component is assembled on the message bus to obtain the layer-0 active component is specifically as follows:
during assembly, the layer-1 virtual message bus is bus-merged to obtain the message bus; the layer-1 interface-operator-ID mapping table is table-merged to obtain the layer-0 interface-operator-ID mapping table; the layer-1 alias linked list is table-merged to obtain the layer-0 alias linked list; and the layer-1 operators are merged to obtain the layer-0 operators.
The invention also provides a component splitting method for the above runnable active-component composition model, comprising the following steps:
a component splitting rule is preset; when the runnable active-component composition model satisfies the component splitting rule, the runnable active-component composition model is split according to the component splitting rule.
Preferably, the component splitting rule is: when the scheduler of the message bus is executed by two or more cores or processors, the message bus is split into peer distributed sub-buses equal in number to the cores or processors, and each active component of each layer of the runnable active-component composition model is attached to its corresponding sub-bus; or
the component splitting rule is: the load of each active component in the runnable active-component composition model is dynamically measured and, according to a preset load-balancing principle, the message bus is dynamically split into several peer distributed sub-buses, and each active component or operator of each layer of the runnable active-component composition model is attached to its corresponding sub-bus; or
the component splitting rule is: the energy efficiency of each active component in the runnable active-component composition model is dynamically measured and, according to a preset energy-saving principle, the message bus is dynamically split into several peer distributed sub-buses, and each active component or operator of each layer of the runnable active-component composition model is attached to its corresponding sub-bus; or
the component splitting rule is: the failure rate of each active component in the runnable active-component composition model is dynamically measured and, according to a preset reliability principle, the message bus is dynamically split into several peer distributed sub-buses, and each active component or operator of each layer of the runnable active-component composition model is attached to its corresponding sub-bus.
The beneficial effects of the invention are as follows:
With the independent active component, the runnable active-component composition model, and the component splitting method provided by the invention, many small active components are assembled into a final large active component whose component protocol is identical to that of each small active component. The large component completely eliminates call dependencies on the subordinate small components, leaving only loose data coupling between components. A component can therefore leave its concrete application environment and perform its function independently; components can be reused, restructured, and combined simply and efficiently, and the whole construction system acquires a high degree of reusability.
Accompanying drawing explanation
Fig. 1 is a structural diagram of the TinyOS 2.x basic-task scheduler of the prior art;
Fig. 2 is a schematic diagram of the equivalent model of a pull-mode function call of the prior art;
Fig. 3 is a schematic diagram of a component-assembly example provided by the invention;
Fig. 4 is a schematic diagram of the general model of the cooperative concurrent frog message bus provided by the invention;
Fig. 5 is a schematic diagram of a concrete application model of the cooperative concurrent frog message bus provided by the invention.
Detailed description of the embodiments
The invention is described in detail below with reference to the accompanying drawings:
Embodiment 1: the independent active-component composition model
As shown in Fig. 3, the invention provides an independent active-component composition model. The model is the set P = {layer-1 active component, layer-2 active-component subset, ..., layer-n active-component subset}, where n >= 2; each active component in the layer-n active-component subset is assembled on a layer-n virtual message bus to obtain a single active component of the layer-(n-1) active-component subset; each active component in the layer-(n-1) active-component subset is assembled on a layer-(n-1) virtual message bus to obtain a single active component of the layer-(n-2) active-component subset; and so on, until each active component in the layer-2 active-component subset is assembled on the layer-2 virtual message bus to obtain the layer-1 active component;
wherein every active component of every layer in the set P obeys the same protocol. In the invention, many small active components are assembled into a final large active component whose component protocol is identical to that of each small active component. The large component completely eliminates call dependencies on the subordinate small components, leaving only loose data coupling between components; a component can leave its concrete application environment and perform its function independently; components can be reused, restructured, and combined simply and efficiently, and the whole construction system acquires a high degree of reusability.
Each active component in the set P, from the layer-1 active component through the active components of the layer-n subset, comprises: a virtual message bus, an interface-operator-ID mapping table, an alias linked list, and one or more operators. The interface-operator-ID mapping table stores the correspondence between interface operator IDs and entry functions; the alias linked list stores the correspondence between referenced operator IDs and interface operator IDs; an interface operator ID is an operator identifier of the active component itself, and a referenced operator ID is an identifier used inside the active component to refer to an operator attached to the message bus.
The referenced operator ID, the alias linked list, and the interface-operator-ID mapping table are explained below:
(1) Referenced operator ID:
When a component exists on its own in the form of source code or an intermediate library, the referenced operator IDs inside the component are merely symbolic names of connections yet to be made; only after several related components are compiled and linked together with a configuration file are those referenced operator IDs assigned their formal ID values or variables.
(2) Alias linked list
The alias linked list stores the correspondence between referenced operator IDs and interface operator IDs. Preferably, the correspondence it stores is an equivalence mapping. The alias-linking operation simply tells the compiler which component's interface operator ID each referenced operator ID appearing in some other component should be linked to. In essence it determines and delineates the data connections that are due between component and component, so as to complete the intended function of the system.
During alias linking, only the referenced operator ID and the intended interface operator ID are bound together; the entry functions of the operators, their parameters, and the message formats are not considered. Whether the parameters of the two entry functions match the concrete specification and format of the messages is judged and decided by the application system itself, which gives the component linking operation the greatest possible freedom. The check can generally be performed by the compiler when components are statically compiled and linked, or confirmed by the operators themselves when the system runs.
The concrete implementation of alias linking is very simple: the referenced-ID variable and the known ID variable are simply bound to the same value or variable, which can be done with the alias operation or an assignment of the programming language. For example, let refId be the referenced operator ID and calcId the known interface operator ID; in C++ this is implemented as: aID_t &refId = calcId; and in C as: aID_t refId = calcId;.
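A slightly fuller, purely illustrative C sketch of alias linking (the component names and ID values are invented for this example) could look like this:
    typedef unsigned short aID_t;

    /* Component "calc" publishes an interface operator ID. */
    enum { CALC_ADD_ID = 0x0301 };              /* known interface operator ID             */

    /* Component "ctrl" refers to its partner only through a symbolic referenced ID. */
    static aID_t refId;                         /* referenced operator ID, still unbound   */

    /* Alias linking, normally driven by the configuration file at assembly time. */
    void link_aliases(void) {
        refId = CALC_ADD_ID;                    /* bind referenced ID to the interface ID  */
    }
    /* Thereafter "ctrl" posts messages addressed to refId; it never names a function of "calc". */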
(3) Interface-operator-ID mapping table
The interface-operator-ID mapping table stores the correspondence between interface operator IDs and entry functions.
The message entry function inside a component can be separated from the interface operator ID. That is, the functional code of the component need not contain the name of the interface operator ID and contains only the code of the entry function. The binding between the two can be delayed by one step and completed, together with alias linking, when the component or the system is assembled. Several interface operator IDs can be mapped to the same entry function, which is valuable when implementing statically referenced multi-instance objects.
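As an assumption-based C sketch (all names invented), an interface-operator-ID mapping table could be expressed as a simple constant table; note that two IDs may share one entry function:
    typedef void (*entry_fn_t)(const void *msg, unsigned len);
    struct id_map_entry { unsigned short id; entry_fn_t entry; };

    /* entry functions contain no interface operator ID of their own */
    static void calc_add_entry (const void *msg, unsigned len) { (void)msg; (void)len; }
    static void calc_stat_entry(const void *msg, unsigned len) { (void)msg; (void)len; }

    static const struct id_map_entry id_map[] = {
        { 0x0301, calc_add_entry  },
        { 0x0302, calc_stat_entry },
        { 0x0303, calc_stat_entry },   /* two IDs -> one entry: static multi-instance */
    };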
The virtual message bus is a logical, conceptual bus; no actual coding needs to be done for it, and it is not a physically separate bus that appears on its own. A component is always meant to be plugged into some bus, and by calling bus API functions the component is, in effect, attached to a bus in hard-coded form. But when a component exists on its own as source code or as an intermediate library, it is not actually linked with any particular bus, and the bus code is not contained in the component. Only after the compilation and linking of the whole bus node or the whole system is completed does the component link with the code of some concrete bus and become one of that bus's attached members; when the independent active-component composition model is attached to a bus, it becomes the runnable active-component composition model, which is introduced in Embodiment 2. The component assumes that it is running on a bus, but that bus does not yet exist; it is therefore called a virtual message bus. It does not exist inside the component and does not affect the component's independence.
That each active component in the layer-n active-component subset is assembled on a layer-n virtual message bus to obtain a single active component of the layer-(n-1) active-component subset, where n >= 3, is specifically as follows:
each active component in the layer-n active-component subset comprises a layer-n virtual message bus, a layer-n interface-operator-ID mapping table, a layer-n alias linked list, and one or more layer-n operators; the single active component of the layer-(n-1) active-component subset obtained after assembly comprises a layer-(n-1) virtual message bus, a layer-(n-1) interface-operator-ID mapping table, a layer-(n-1) alias linked list, and one or more layer-(n-1) operators;
during assembly, the layer-n virtual message buses are bus-merged to obtain the layer-(n-1) virtual message bus; the layer-n interface-operator-ID mapping tables are table-merged to obtain the layer-(n-1) interface-operator-ID mapping table; the layer-n alias linked lists are table-merged to obtain the layer-(n-1) alias linked list; and the layer-n operators are merged to obtain the layer-(n-1) operators.
That each active component in the layer-2 active-component subset is assembled on the layer-2 virtual message bus to obtain the layer-1 active component is specifically as follows:
each active component in the layer-2 active-component subset comprises a layer-2 virtual message bus, a layer-2 interface-operator-ID mapping table, a layer-2 alias linked list, and one or more layer-2 operators; the layer-1 active component comprises a layer-1 virtual message bus, a layer-1 interface-operator-ID mapping table, a layer-1 alias linked list, and one or more layer-1 operators;
during assembly, the layer-2 virtual message buses are bus-merged to obtain the layer-1 virtual message bus; the layer-2 interface-operator-ID mapping tables are table-merged to obtain the layer-1 interface-operator-ID mapping table; the layer-2 alias linked lists are table-merged to obtain the layer-1 alias linked list; and the layer-2 operators are merged to obtain the layer-1 operators.
Concretely, during component assembly the virtual message bus is only a logical concept and requires no actual coding. In practice, therefore, only the interface-operator ID mapping table and the alias linked list are needed, and both can be placed in the same configuration file. Component assembly thus reduces to writing one concise configuration file, while the actual operator function code is stored in an operator function library; the operator functions in that library have no mutual call relationships whatsoever and are merely enumerated side by side in the same library.
The content of the configuration file is likewise a simple enumeration: the correspondence between interface-operator IDs and entry functions, and the correspondence between referenced operator IDs and interface operators. Referencing, splitting, modifying or reusing a component only changes these correspondences, which is simple and clear. When another component is to be included in its entirety, so that it becomes part of the present component, it suffices to include that component's configuration file; its function code does not need to be changed.
A concurrent operator, as the most elementary basic component, can be assembled into larger, higher-level components. Inside such a larger component the constituent operators still have no direct function-call relationships between them; they have only data-communication relationships and still communicate with each other by messages over the bus. A local alias linked list determines the data connections and communication relations among the operators inside the component. Because the message-scheduling efficiency of this message bus is close to, or identical with, that of an assembly-level subroutine call, the presence of a large number of operators seldom or never reduces the operating efficiency of the system.
As shown in Fig. 3, which is a schematic diagram of a component-assembly example provided by the invention, component 3 and component 4 are to be assembled into a larger component Ca, and component Ca is the independent runnable component provided by Embodiment One of the present invention; component Ca then forms a still larger component Cb together with component 1 and component 2, and component Cb is the runnable component provided by Embodiment Two. The data-transfer relationships among component 1, component 2, component 3 and component 4 are shown in the left half of the figure; the assembled operating structure actually formed is shown in the right half.
The actual function code of component 1, component 2, component 3 and component 4 is stored side by side in a separate operator function library and need not be considered here. The configuration file of component Ca contains: the correspondence between operator IDs ID3a, ID3b and the entry functions in component 3; the correspondence between operator ID4 and the entry function in component 4; the reference relations "component 3 references ID4" and "component 4 references ID3b"; and the externally announced operators, namely operator ID3a and the operator referenced by component 4. The configuration content of component Cb is similar and is not repeated here; a sketch of how such a configuration might look in code is given below.
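Purely as an illustration, the configuration file of component Ca could be compiled into static C tables such as the following; the type names, the numeric ID values and the table layout are assumptions of this sketch and are not prescribed by the invention.

    /* Hypothetical compiled form of component Ca's configuration file,
     * following the Fig. 3 example; ID values are invented for the sketch. */
    typedef void (*entry_fn)(void *msg);

    void comp3_entry_a(void *msg);   /* entry functions live in the        */
    void comp3_entry_b(void *msg);   /* separate operator function library */
    void comp4_entry(void *msg);     /* and are only declared here         */

    struct id_map { unsigned id;     entry_fn fn;        };
    struct alias  { unsigned ref_id; unsigned target_id; };

    /* interface-operator ID -> entry function */
    static const struct id_map ca_id_map[] = {
        { 0x31 /* ID3a */, comp3_entry_a },
        { 0x32 /* ID3b */, comp3_entry_b },
        { 0x40 /* ID4  */, comp4_entry   },
    };

    /* alias linked list: which ID each component's reference resolves to */
    static const struct alias ca_alias[] = {
        { 0x31, 0x40 },   /* component 3 references ID4  */
        { 0x40, 0x32 },   /* component 4 references ID3b */
    };

    /* operators that Ca announces to the outside world */
    static const unsigned ca_exports[] = { 0x31 /* ID3a */ };

Assembling Ca into Cb would then amount to concatenating such tables, which is the "table fusion" described above; the operator function code itself is never touched.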
This completes the description of the general model of the independent runnable-component composition model and of a concrete implementation case.
In addition, the independent runnable-component composition model provided by the present invention has a built-in bus interface for attaching to a bus, whereby an independent runnable component is converted into a runnable component. The present invention also provides a special case of this bus interface, namely the collaborative concurrent frog bus interface, which is used to attach to the collaborative concurrent frog bus. The collaborative concurrent frog bus interface provided by the invention and the collaborative concurrent frog bus adapted to it are described in detail below:
As shown in Fig. 4, the invention provides a collaborative concurrent frog message bus adapted to the collaborative concurrent frog bus interface. The concurrency model of this message bus is: parallel enqueue, collaborative dequeue, i.e. a many-in, single-out model. Before a message enters the message queue pool, all operations on it are non-blocking parallel operations; after it enters the message queue pool, operations are collaborative serial operations. The bus specifically comprises: an information acquisition module, a parallel ring allocator, a linear memory block, a message filling module, a parallel enqueuer, a message queue pool, a queue scheduling manager, an entry mapping table and a system stack. Each of these parts is described in detail below:
(1) Information acquisition module
The information acquisition module obtains the target operator ID and the message length value from a received, pending external parallel message, where the target operator ID identifies the operator that will process the message. It also obtains the length of any additional management information and then computes the sum of that additional management length and the message length, giving the space-occupancy value of the message; the additional management length is >= 0. A minimal sketch of this step is given below.
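A minimal sketch of the acquisition step, assuming one possible variable-length header layout; the field widths, field order and function name are assumptions, since the invention leaves these to the concrete application system.

    #include <stdint.h>
    #include <stddef.h>

    /* assumed header layout: length first, then the target operator ID */
    typedef struct {
        uint16_t len;        /* useful message length in bytes           */
        uint32_t actor_id;   /* ID of the operator that will handle it   */
    } msg_head;

    /* Returns the slot space the message will occupy and reports the
     * target operator ID extracted from the header.                     */
    static size_t msg_space(const msg_head *m, size_t mgmt_len,
                            uint32_t *out_actor_id)
    {
        *out_actor_id = m->actor_id;
        return (size_t)m->len + mgmt_len;   /* message + management info */
    }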
It should be noted that the term "operator" used in the present invention is a free translation of the English computer-science term Actor, which is usually translated as "role"; the concept of an operator as used in mathematics characterises the meaning of Actor more accurately, and that term is therefore adopted throughout this document as the translation of Actor.
In terms of scheduling efficiency, an operator is a concurrent entity more lightweight than a task, process or thread and somewhat more heavyweight than a callback function; it is roughly equivalent to a fiber or coroutine, though slightly more lightweight than either. In the present invention an operator on the message bus has only a ready state: even when no message exists on the bus, the operators remain ready, and as soon as a message arrives on the bus and the operator corresponding to that message is scheduled, the scheduled operator obtains the processor immediately.
The target operator ID may be assigned simply in sequence, or it may carry other implicit meanings, such as a priority, a fixed service number, a distributed ID, and so on. For example, the target operator ID can simply be divided into two parts: an external bus-node number and an operator number within the message bus. With this structure, simply replacing a referenced target operator ID is enough to stop referencing a local operator and instead reference an operator residing on another external node, achieving transparent distributed computation and migration. More elaborate partitioning schemes, even ones resembling Internet IP addressing, can realise more complex distributed application logic.
In a practical message bus the target operator ID inside a message usually conceals other useful information (such as an external node segment), so a conversion is needed to extract an explicit, correct local target operator ID. Other parameters contained in the message may likewise need unified format matching and conversion. Parameter extraction and format conversion are therefore required, and the normal outcome is a correct target operator ID together with the start address of the message (slot).
(2) Parallel ring allocator
The parallel ring allocator is a non-blocking parallel ring-space allocator. According to the space-occupancy value obtained by the information acquisition module, it dynamically carves the linear memory block according to a ring-allocation principle and obtains, in a non-blocking parallel manner, an empty message slot whose size equals the space-occupancy value of the message.
When several messages are waiting to be enqueued, the parallel ring allocator dynamically divides the linear memory block into multiple message slots (Slots), each of which holds exactly one complete message and, if the application requires it, additional management information. The slots are allocated and recycled contiguously, one adjacent to the next, so that logically the linear memory block becomes a ring-shaped slot space. When the parallel ring allocator carves an empty slot at the very end of the linear memory block and the remaining free space there is smaller than the space-occupancy value of the message, that remaining space is simply abandoned and forms a discarded slot. This guarantees that the space used by every message slot is flat, linear and never wraps around, so that the operators' and the application's logical view of the slot space remains simple, clean and natural; a sketch of the carving rule follows.
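A minimal single-producer sketch of the carving rule; the buffer size and names are assumptions, and the lock-free multi-producer variant using CAS/CAS2 is described later in this document.

    #include <stdint.h>

    #define BUF_SIZE 4096u
    static uint8_t  ring_buf[BUF_SIZE];  /* the linear memory block           */
    static uint32_t alloc_tail = 0;      /* next free byte in the block       */

    /* Carve one slot of 'need' bytes. A remainder at the end of the linear
     * block that is too small to hold the message is abandoned as a
     * discarded slot, so a slot never wraps around the end of the block.
     * (A real allocator must also check against the reclaim head so that it
     * does not overrun unprocessed messages; overflow handling is left to
     * the application, as described above.)                                 */
    static void *carve_slot(uint32_t need)
    {
        if (alloc_tail + need > BUF_SIZE)  /* end remainder too small:        */
            alloc_tail = 0;                /* discard it, continue at offset 0 */
        void *slot = &ring_buf[alloc_tail];
        alloc_tail += need;
        return slot;
    }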
The parallel ring allocator is an efficient and concise non-blocking parallel space allocator. Compared with a blocking allocator it eliminates problems such as deadlock, priority inversion, the impossibility of taking locks inside interrupts, and the impossibility of concurrency inside critical sections. With a cheap software-only method it achieves lock-free allocation, and with cheap hardware it achieves wait-free allocation completed by a single assembly instruction. Concretely, interrupt masking or the CAS/CAS2 or LL/SC processor primitives can be used, in a software-only manner, to implement allocation as a lock-free (Lock-Free) algorithm; the same function can also be implemented directly in hardware, obtaining the effect of a wait-free (Wait-Free) algorithm and highly efficient allocation in which one assembly instruction completes the space allocation. The pure-software lock-free algorithm is described later.
(3) Linear memory block
The linear memory block serves as the message buffer and should be sufficiently large. In modern conventional application programs the usual logic and guideline, apart from fixed-length memory allocation, is to assign all remaining RAM as stack space. Conversely, in an application system that uses the message bus provided by the invention, the size of the application system stack should be fixed first, and all remaining RAM should then be assigned to the message buffer. This is because a large number of concurrent operators (Actors) form the main body of such a system, so there are many unpredictable messages that require a large amount of buffering; at the same time, the function-call depth of each operator is not particularly large and calls are generally simple and direct, and because of collaborative execution the stack spaces of all operators overlap, so the maximum RAM stack space required is easy to estimate and can be allocated as a fixed-length region of RAM.
If the message buffer is not large enough and overflows at run time, no new message can be enqueued, causing a system fault or crash. The handling principle for this error is that the application system deals with it itself: it may enlarge the message buffer, modify its processing logic, simply shut down, and so on. This fault-handling scheme is exactly analogous to the way modern conventional applications handle a system-stack overflow. By adopting this logic and mechanism the message bus sheds a responsibility that naturally belongs to the user, namely unconditionally guaranteeing that the application system is not swamped by massive volumes of data; this greatly simplifies the design logic and code of the message bus and gives it the widest possible software and hardware adaptability and portability.
To increase the generality of the message bus, the present invention makes only minimal stipulations about the internal structure of the messages it carries: messages are divided into fixed-length messages and variable-length messages. Fixed-length-message systems are generally used in rather special application environments, such as ATM switches and similar applications; variable-length-message systems are the most widely used and have the most general value.
Both fixed-length and variable-length messages must contain the target operator ID. For fixed-length messages the message length value is defined by the concrete application system and its message bus and need not appear explicitly in the message structure; for variable-length messages the message length value must appear explicitly in the message structure. The sizes of the length field and of the target operator ID itself are closely related to the processor word size and are defined by the application system and its message bus; 1, 2, 4, 8 or 16 bytes are generally recommended, but no particular size is mandated. Whether the total length of a single message or other management information (such as a dynamic priority) is carried inside the message is likewise defined by the application system and its message bus.
(4) Message filling module
The message filling module fills the message and the additional management information into the empty message slot allocated by the parallel ring allocator, obtaining a non-blank message slot.
Once the parallel ring allocator has allocated space for some parallel message i and assigned it a message slot, that slot space is occupied exclusively by that message, so the slot can be manipulated at will and message filling can now be carried out. Even if this stage incurs a long delay, the remainder of the system is unaffected.
Concretely, the message filling module can fill messages according to either of the following two schemes:
(a) First scheme: fill first, enqueue afterwards.
Concretely, the message filling module first fills the message and the additional management information into the empty message slot allocated by the parallel ring allocator, obtaining a non-blank message slot; the parallel enqueuer then performs a non-blocking parallel enqueue operation on the non-blank slot. Specifically:
The parallel ring allocator is configured with a first head pointer and a first tail pointer. When a new empty message slot is to be allocated, a space equal to the space-occupancy value of the message is marked off directly after the current first tail pointer, yielding the new empty slot, and the first tail pointer is then moved, in a non-blocking parallel manner, to the end of the new empty slot.
The parallel enqueuer is configured with a second head pointer and a second tail pointer; the non-blocking parallel enqueue of the non-blank message slot is realised by moving the second tail pointer in a non-blocking parallel manner.
Here the first head pointer and first tail pointer configured for the parallel ring allocator are distinct from the second head pointer and second tail pointer configured for the parallel enqueuer.
(b) Second scheme: enqueue first, fill afterwards.
The parallel enqueuer first performs a non-blocking parallel enqueue of the empty message slot, and the message filling module then fills the message and the additional management information into the enqueued empty slot. Specifically:
The parallel ring allocator and the parallel enqueuer share the same head pointer and tail pointer, so the instant the parallel ring allocator allocates an empty message slot from the linear memory block, the parallel enqueuer has also already enqueued that slot; the message filling module then fills the message and the additional management information into the enqueued empty slot.
Furthermore, in a preemptive environment the slot is put into a dormant state before the parallel ring allocator allocates it from the linear memory block; an empty slot in the dormant state is called a sleeping message slot. The message filling module then fills the message and the additional management information into the sleeping slot, and once filling is complete the sleeping slot is activated, i.e. changed into the active state; a slot in the active state is called an active message slot. A sleeping message slot is a slot that the message bus must not dispatch to an operator for execution; an active message slot is a slot that falls within the normal scheduling scope of the bus.
Sleeping and active slots are generally distinguished by adding a management flag to the slot. As a simplification, the flag can be hidden in other information to save RAM. For example, when variable-length messages are used the useful message length is necessarily non-zero, so it can be agreed that the message-length parameter written into the slot distinguishes sleeping from active slots: when the length written in the slot is 0 the slot is a sleeping message slot, and when it is non-zero the slot is an active message slot. In this way the slot is activated the instant the message-length parameter is written into it.
(5) Parallel enqueuer
The parallel enqueuer performs the non-blocking parallel enqueue of the empty message slot or of the non-blank message slot.
Concretely, the parallel enqueuer is the critical component at which parallel messages turn into serial processing. The mutually preemptive parallel behaviour at this boundary must be coded with extreme care; once past it, everything easily turns into collaborative serial behaviour. Because the message bus is a many-in, single-out model, the parallel enqueuer can in most application scenarios be implemented with a simplified model chosen according to the actual conditions.
The parallel enqueuer is an efficient and concise non-blocking parallel enqueue component. Compared with a blocking enqueuer it eliminates problems such as deadlock, priority inversion, the impossibility of taking locks inside interrupts and the impossibility of concurrency inside critical sections. With a cheap software-only method it achieves lock-free enqueueing, and with cheap hardware it achieves wait-free enqueueing completed by a single assembly instruction. Concretely, interrupt masking or the CAS/CAS2 or LL/SC processor primitives can be used in software to implement enqueueing as a lock-free (Lock-Free) algorithm; the same function can also be realised directly in hardware, obtaining the effect of a wait-free (Wait-Free) algorithm in which one assembly instruction completes the enqueue. Non-blocking, and in particular lock-free, enqueue operations on linked lists have been described in many published papers and are not repeated here. The concrete implementation of the parallel enqueuer is closely tied to the concrete structure and implementation of the message queue pool inside the bus. In the normal case, one or more singly linked lists with head and tail pointers are operated on and the parallel non-blocking enqueue is performed at the tail. To reduce the complexity of the parallel operation, a dedicated singly linked queue used only for the parallel-to-serial enqueue can also be provided, with the subsequent management operations then applied to that queue. In special cases, enqueueing may have other special solutions; a particularly concise model is described later. One common way of writing the tail enqueue is sketched after this paragraph.
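One common lock-free way of enqueueing at the tail of a singly linked message list is sketched below, using C11 atomics to stand for the exchange/CAS primitives; the node layout is an assumption, and this is only one known technique, not the only admissible one. The swap-tail-then-link scheme suits the many-in, single-out model because only the single collaborative consumer ever follows the next pointers.

    #include <stdatomic.h>
    #include <stddef.h>

    struct node {
        _Atomic(struct node *) next;   /* the message slot data follows */
    };

    /* tail always points at the most recently enqueued node (a dummy
     * node is assumed to exist initially, so the list is never empty). */
    static _Atomic(struct node *) q_tail;

    /* Non-blocking parallel enqueue: atomically swap ourselves in as the
     * new tail, then link the previous tail to us. Each producer performs
     * one atomic exchange, which on x86 is a single locked instruction.  */
    static void parallel_enqueue(struct node *n)
    {
        atomic_store_explicit(&n->next, NULL, memory_order_relaxed);
        struct node *prev =
            atomic_exchange_explicit(&q_tail, n, memory_order_acq_rel);
        atomic_store_explicit(&prev->next, n, memory_order_release);
    }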
(6) Message queue pool
The message queue pool buffers messages that have been enqueued but not yet processed.
The message queue pool is the core data-structure area of the message bus. It buffers, filters, manages and schedules all enqueued-but-unprocessed messages and selects the message that should be processed first. Because operation here is entirely collaborative, the various scheduling and management algorithms can be designed without apprehension.
The concrete implementation of the message queue pool is closely related to the concrete application system. In the normal case it is a single singly linked list with head and tail pointers, which suffices for simple scheduling algorithms such as FIFO (First In First Out) or simple priority scheduling. In complex situations, for example when several simple scheduling algorithms coexist in one system, several singly linked lists are needed to realise more complex scheduling, such as time-optimised dynamic priority scheduling or earliest-deadline-first EDF (Earliest Deadline First) scheduling. In special cases, more complex data structures such as doubly linked lists or hash tables may be needed to fulfil special system functions and requirements.
In the present invention the message queue pool adopts zero PCB, which simplifies the concurrency model and gives the message bus the widest possible adaptability; more crucially, it effectively saves RAM. In a concurrent application system built with this message bus it is entirely normal, because of component assembly, to have thousands of operators (Actors) at once. With zero PCB the number of operators has no connection whatsoever with RAM consumption: no matter how many operators there are, the RAM they occupy does not change at all. The message bus can therefore easily be applied in RAM-scarce settings, such as WSN application systems.
Zero PCB means that an operator no longer expresses its various task states dynamically. It is therefore stipulated that an operator on the bus no longer has a waiting state; only the ready state and the running state exist. Even when no message exists on the bus the operators are in the ready state, and when a message arrives on the bus the operator at the head of the queue obtains the processor immediately and changes into the running state. Whether the application system as a whole is in a waiting state therefore depends only on whether messages exist inside the bus; this provides a deep theoretical and technical foothold for system power saving.
Zero PCB means that an ordinary operator can be expressed dynamically without any RAM. This does not exclude certain special-purpose operators or queues from occupying considerable RAM, i.e. from being expressed with non-zero PCBs; for example, an EDF queue records the deadline of every real-time operator.
A task control block (PCB) of zero RAM length, i.e. zero PCB, therefore reduces scheduling and execution time relative to task PCBs of non-zero length in RAM, forms an efficient, concise and unified concurrent base model, reduces RAM consumption, and allows that concurrent base model to be applied universally on any existing computer architecture.
(7) Queue scheduling manager
The queue scheduling manager selects from the message queue pool, according to a preset scheduling policy, the designated message that needs processing, and performs the collaborative dequeue of that designated message.
Concretely, the queue scheduling manager uses the message queue pool and the various scheduling algorithms to schedule and manage all enqueued-but-unprocessed messages, for example by assigning message priorities and placing the highest-priority message at the head of the queue so that it can be dequeued. When selecting the head of the queue it can, very simply, mark and extract the message from the head of the queue; if several queues exist, the highest-priority queue must be selected first. Because message formats are generally complex and unpredictable, the address of the message slot can simply be taken as the message address. For the simplest FIFO algorithm the queue scheduling manager need not even appear in an explicit, independent form; it can be implicit in other related mechanisms and code. Placing the queue scheduling manager after the parallel enqueuer avoids complex, tedious and dangerous parallel preemptive operations; since operation here is entirely collaborative, the various scheduling and management algorithms can be designed without apprehension.
(8) Entry mapping table
The entry mapping table is looked up according to the target operator ID to obtain the function entry address corresponding to that ID; then, using the function entry address and the slot address of the designated message, the execution function of the corresponding operator is called, thereby processing the dequeued designated message.
The entry mapping table stores the mapping from operator ID to function entry address. Looking it up with the target operator ID yields the corresponding function entry address, so that the next step can jump to that entry and execute the operator's function; this is in effect an assembly-level indirect-address jump mechanism. The entry mapping table is generally an address table ordered by operator ID from small to large, and the operator IDs themselves generally do not appear explicitly inside the table. To compress the size of the entry table and make full use of the space, operator IDs generally use contiguous encoding, as sketched below.
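A minimal sketch of such a table and of the indirect call it enables; the operator names and the use of a plain C array are assumptions, while the contiguous IDs serve directly as indices so that the IDs themselves need not be stored.

    #include <stdint.h>

    typedef void (*actor_fn)(void *msg_slot);

    /* example operators; their code lives in the operator function library */
    void actor_blink(void *msg_slot);
    void actor_uart_tx(void *msg_slot);

    /* entry mapping table: index == operator ID; may be placed in ROM */
    static const actor_fn entry_map[] = {
        actor_blink,      /* operator ID 0 */
        actor_uart_tx,    /* operator ID 1 */
    };

    /* assembly-level indirect jump: call the operator with the slot address */
    static void dispatch(uint32_t actor_id, void *msg_slot)
    {
        entry_map[actor_id](msg_slot);
    }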
To save RAM and suit RAM-scarce application systems, the entry mapping table can be placed in ROM. The table may also carry, implicitly or explicitly, other useful information, such as the static priority of each operator. Because operation is collaborative, the entry mapping table can be modified easily and safely even while the program is running, realising hot upgrading of system code at run time; this is of very great practical value for highly reliable systems that run continuously 24 hours a day, 7 days a week. Furthermore, because the entry mapping table stores a mapping from operator ID to function entry address, in contrast to schemes that use task entry addresses directly, parallel operators can be designated across machines, which directly supports completely transparent distributed parallel computing as well as hot upgrading of code at run time.
(9) System stack and execution
The system stack is the stack space shared by all operators on the message bus; the stack spaces shared by the individual operators cover one another, i.e. they overlap rather than being stacked one on top of another.
Using the function entry address obtained above and the start address of the message (slot), the execution function of the operator is called directly. Compared with TinyOS 2.x, the biggest difference is that the present technical solution carries a message pointer at execution time; it therefore becomes an active-message model and can realise a push-style information-transfer mechanism. After an operator exits completely, the stack space it occupied is also emptied completely. Because all operators in the system execute collaboratively, they all share the same system stack space; that is, the stack spaces of all operators overlap. Relative to stacked task stacks, the overlapping collaborative system stack provided by the invention greatly reduces the RAM stack space consumed, gives the system more generality, makes it easy to assess the maximum stack usage, and eases the work of RAM space allocation management. While an operator is running, the message (slot) belongs entirely and privately to that operator, so provided it does not obstruct the operation of the bus the operator may handle the message at will, for example reusing it, or preferentially using, sending, forwarding or modifying the message (slot), to improve system operating efficiency. The scheduler's overall loop is sketched below.
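Bringing the preceding parts together, the collaborative dispatch loop of the bus might be sketched as follows; the helper functions stand for the modules described above and are assumed names, not a prescribed API.

    #include <stdint.h>
    #include <stddef.h>

    /* helpers standing for the modules described above (assumed names) */
    extern void    *queue_pick_head(void);        /* collaborative dequeue   */
    extern uint32_t slot_actor_id(void *slot);    /* read target operator ID */
    extern void     dispatch(uint32_t id, void *slot);  /* entry-table call  */
    extern void     slot_reclaim(void *slot);     /* discard message + slot  */
    extern void     enter_low_power(void);        /* power-saving device     */

    /* Collaborative scheduler: every operator runs to completion as an
     * ordinary C call on the shared system stack, carrying the message
     * pointer (active-message, push-style delivery).                       */
    static void scheduler_run(void)
    {
        for (;;) {
            void *slot = queue_pick_head();
            if (slot == NULL) {        /* no message anywhere in the bus:    */
                enter_low_power();     /* the whole system is idle           */
                continue;
            }
            dispatch(slot_actor_id(slot), slot);
            slot_reclaim(slot);
        }
    }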
(10) Monitoring and management centre
The monitoring and management centre performs centralised monitoring, analysis, control, filtering and management of the messages inside the message bus, for example collecting statistics on the actual running time of every operator on the bus, removing a certain class of messages addressed to a certain operator, or even forcibly terminating a runaway operator. It is mainly used during system debugging and testing and need not exist once the system enters commercial operation.
(11) Space reclamation module
The space reclamation module reclaims, inside the message bus, the dequeued message itself and its message slot, that is, it performs the discard-and-reclaim of the message itself and of the slot space. Discarding the message itself is the dequeue operation of the many-in, single-out model and belongs to the parallel enqueuer; in very simple application systems it can be performed uniformly at head-of-queue selection, so that when the operator runs it can very simply clear the discard flag and reuse the message. Reclaiming the slot space is, in the normal case, the space-reclamation operation of the many-in, single-out model of the parallel ring allocator, and it can also be implemented in hardware.
(12) Power-saving device
The concrete implementation of the power-saving device is closely tied to the hardware of the application system. Because the message bus knows immediately, from whether any message exists inside it, whether the system is in a waiting state, it can notify the application system that uses the bus to perform power-saving scheduling the moment no message exists inside the bus, and notify the hardware to resume normal operation when a message appears.
In many application scenarios (e.g. the 8051 single-chip microcomputer) the processor has neither CAS/CAS2 instructions nor advanced synchronisation primitives for parallel operation such as LL/SC. Similar primitives can then only be simulated by switching interrupts off and on, which lowers the scheduling efficiency of the bus. In that case some simple adaptive changes can be made to the general model to suit the concrete application environment and improve system efficiency. For example, the operators inside the bus produce many messages while the external interrupt environment produces few; this characteristic can be exploited by providing two bus message buffer spaces: interrupt messages are enqueued competitively, using interrupt switching to realise the primitives, while operator messages are enqueued collaboratively and need no interrupt switching, which improves scheduling efficiency. The interrupt-priority property can even be exploited to make a more efficient technical correction so that the two share the same message buffer. A sketch of the interrupt-switching simulation is given below.
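On such processors the compare-and-swap primitive can be simulated by briefly disabling interrupts, as sketched below; the interrupt-control macros are placeholders for whatever the target compiler actually provides (for example EA on 8051 compilers), and the routine is atomic only with respect to interrupts on a single-core machine.

    #include <stdint.h>

    /* Placeholders: map these to the target's real interrupt control,
     * e.g. EA = 0 / EA = 1 on an 8051, or __disable_irq()/__enable_irq(). */
    #define IRQ_DISABLE()  do { /* disable interrupts */ } while (0)
    #define IRQ_RESTORE()  do { /* re-enable interrupts */ } while (0)

    /* Software-simulated compare-and-swap for MCUs without CAS/LL/SC.
     * Correct only on a single core, where interrupts are the sole source
     * of preemption; a careful port should save and restore the previous
     * interrupt state instead of unconditionally re-enabling it.           */
    static int cas32_sim(volatile uint32_t *p, uint32_t expect, uint32_t want)
    {
        int ok = 0;
        IRQ_DISABLE();
        if (*p == expect) {
            *p = want;
            ok = 1;
        }
        IRQ_RESTORE();
        return ok;
    }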
For a hard real-time system, certain key operations must be completed within a determined time limit. This general collaborative model can be made to meet that requirement with small changes when priority scheduling is applied. For responses that must be extremely fast and strict, the processing can be completed directly inside the hardware interrupt handler. Where the response may be delayed by one step, bus scheduling can be used: the operator concerned is run at the highest collaborative priority, and enqueueing is likewise placed at the highest priority so that enqueueing never waits. At the same time, every operator whose execution time exceeds the limit is split, so that within the specified time the bus can finish any single operator in time; the highest-priority operator can then be scheduled within the specified time, completing the hard real-time response. Because this model has a centralised monitoring centre, the running time of each operator is easy to monitor; it is therefore easy to locate the operators that overrun, which assists the design work for hard real-time response.
The message bus provided by the invention has a concise and efficient concrete special case. The functionality of this special case is not especially complete, but its execution performance is exceptionally efficient; it can realise concurrent operator execution, satisfy general concurrent application environments, or serve as the basis for other concurrent applications. When the critical atomic operations are implemented in hardware, its execution efficiency can be identical to, or very close to, that of an assembly-level subroutine call.
In this special case the parallel ring allocator and the parallel enqueuer are merged into one. Sleeping message slots and a message-activation mechanism are adopted, simple FIFO ordering is realised, and the queueing operation is completed naturally at the same time as enqueueing. The specific working steps are:
S1: dormancy marking, space allocation and enqueueing. Completed by dedicated hardware; a single assembly instruction suffices.
S2: the external message is copied into the message slot.
S3: the simplest FIFO queueing. Implicit in the S1 operation; consumes no time.
S4: the message at the head of the queue is dequeued. A single assembly instruction suffices; parameter extraction can generally be omitted.
S5: the operator ID is looked up and execution jumps to the entry. An assembly-level indirect call instruction suffices.
S6: space reclamation. Completed by dedicated hardware; a single assembly instruction suffices.
Comparing this with an assembly-level subroutine call: S1 corresponds to adjusting the stack pointer, S2 to pushing parameters, S5 to an indirect CALL instruction and S6 to popping parameters, while S3 consumes no time. Only S4 represents extra execution time, and it is a very simple operation that a single assembly instruction can complete. The overall execution time is therefore longer by only one assembly-instruction time, and when the message (or parameter list) is larger the proportion of time this represents is very small. Execution performance can therefore come very close to that of a subroutine call; with optimisation and somewhat more complex hardware, identical execution performance can be achieved.
This special case is described in detail below.
For simplicity of description, two terms are first defined: the yielding environment and the preemptive environment.
Usually, in low-end embedded application environments, a single-core, single-processor microcontroller is used and no operating system is adopted. The application software is assembled into the whole application system using structured, modular, sequential programming techniques and runs directly on the bare machine. When an external event occurs, an interrupt handler preempts the main program, captures the event and saves the event state at locations agreed in advance; meanwhile the main program uses one very large endless loop that repeatedly checks whether an external event has occurred and, if so, extracts the event state according to the prior agreement, processes it and produces output.
Many applications resemble the above scenario: the main loop is always preempted by external interrupts, but the main loop never preempts an external interrupt; that is, as long as external interrupt code is running, the main loop is certainly suspended. Such a software execution environment is called a yield-first execution environment, or "yielding environment" for short. For example, on a single-core uniprocessor, the real-time thread environment produced when LINUX applies a real-time priority scheduling policy forms a yielding environment when its lowest-priority thread acts as the main loop.
By contrast, with multi-core processors, single-core multiprocessor systems or ordinary time-sliced preemptive scheduling, the main thread and the other threads can preempt one another or execute simultaneously and interleaved in parallel. Such a software execution environment is called a preempt-first execution environment, or "preemptive environment" for short.
When this message bus is implemented, the main loop acts as the scheduler and performs message dequeueing, scheduling and operator execution, while the external interrupts preempt one another and feed messages into the system queue. In a preemptive environment the scheduler and the external interrupts preempt one another and execute interleaved, so the scheduler may well run while an external interrupt is still filling a message slot that is not yet complete; the scheduler then has the opportunity to touch that half-finished message, and measures must be taken to ensure that it cannot use such a half-finished message as a normal one. In a yielding environment the scheduler has no chance to run while an external interrupt is filling a slot: it either cannot see the new message at all, or what it sees is a complete, fully enqueued message. Exploiting this property, the parallel enqueue algorithm can be simplified in a yielding environment and the dormancy mark need not be stamped on the message (slot).
The present embodiment can be used in a preemptive environment and a transparent distributed environment and is based on an x86 32-bit multi-core system.
The most essential technical point of this embodiment is that the parallel ring allocator and the parallel enqueuer are merged into one operation: the head and tail pointers of the ring space are used simultaneously as the head and tail pointers of the message queue, the two queues sharing the same pair of pointers. In this way, the instant a message slot is carved out of the linear space and enters the ring slot space, it has also entered the system message queue.
In a preemptive environment, therefore, to prevent the scheduler from misusing such a new message slot (whose message data has not yet been filled in), a dormancy mark must be written into the slot in advance. The dormancy mark is implicit in the slot's length parameter: when the length is 0 the slot is dormant and not yet filled, and the scheduler should ignore it.
The message format is binary data of arbitrary length, divided into a message header and a message body. The message body may be arbitrary data of any length smaller than 65536 - 8 bytes; a message body of 0 bytes is also legal, in which case the whole message has no body and consists of the header only. The header has three parts: a 2-byte message-length parameter size, a 2-byte CAS2 counter cas2cnt and a 4-byte operator id, 8 bytes in total, which fits exactly within one CAS2 operation range of a 32-bit x86 CPU, as written out below.
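Written out as a C structure (the type name is an assumption; the layout follows the description above):

    #include <stdint.h>

    /* 8-byte message header of this embodiment: exactly one cmpxchg8b
     * (CAS2) operand on 32-bit x86. size == 0 marks a dormant slot;
     * id == 0 marks a discarded message. The message body, up to
     * 65536 - 8 bytes, follows immediately after the header.            */
    typedef struct {
        uint16_t size;      /* message length parameter                  */
        uint16_t cas2cnt;   /* counter guarding the lock-free CAS2 path  */
        uint32_t id;        /* target operator ID                        */
    } frog_msg_head;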
In a preemptive environment, pre-writing the dormancy mark with a lock-free algorithm requires a CAS2 operation, and the cas2cnt counter is necessary to prevent the ABA problem during the lock-free CAS2 operation; the underlying principle can be found in the relevant literature and is not repeated here. In a yielding environment the dormancy mark is not needed and neither is the CAS2 operation, so cas2cnt need not exist and can be discarded.
In the present case the CAS operation is completed with the x86 assembly instruction cmpxchg, which operates on 4 bytes at a time, and the CAS2 operation with the assembly instruction cmpxchg8b, which operates on 8 bytes at a time. Under the x86 architecture the lock instruction prefix completes the memory-bus locking required for CAS/CAS2 operations on multi-core machines.
The 32-bit operator ID can be divided, extremely simply, into two parts: a node number and an operator number. When the node number is 0, the operator number that follows is treated as an operator on this bus; when the node number is not 0, the target operator is not on this bus but on another, external node, and the operator number that follows is therefore treated as an operator on that external node. How many bits the node number and the operator number each occupy can be agreed in advance within the application system. Every external node needs a local operator to handle certain necessary affairs on its behalf, for example forwarding the message into a communication pipe that leads to that external node; this local operator is called the proxy operator. The split and the local/remote decision are sketched below.
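A sketch of the ID split and of the local-versus-remote routing decision; the 16/16-bit split and the helper for finding the proxy operator are assumptions, since the invention leaves the split to be agreed per application.

    #include <stdint.h>

    #define NODE_BITS 16u                                   /* assumed split */
    #define NODE_OF(id)   ((uint32_t)(id) >> NODE_BITS)
    #define ACTOR_OF(id)  ((uint32_t)(id) & ((1u << NODE_BITS) - 1u))

    /* assumed helper: local proxy operator responsible for a given node */
    extern uint32_t proxy_actor_for_node(uint32_t node);

    /* Node number 0 means the target lives on this bus and is dispatched
     * locally; any other node number means the message is handed to that
     * node's proxy operator, which forwards it (e.g. into a pipe).        */
    static uint32_t route_actor(uint32_t target_id)
    {
        if (NODE_OF(target_id) == 0)
            return ACTOR_OF(target_id);
        return proxy_actor_for_node(NODE_OF(target_id));
    }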
The ring slot-space queue has a head pointer head and a tail pointer tail, which double as the head and tail pointers of the system message queue. When the head and tail pointers are equal, there is no message (slot) in the ring slot space and the queue is empty. The situation in which the ring slot space overflows is not considered here; that kind of exceptional fault is handled by the user application itself. The tail pointer therefore always points into the free area of the linear memory block.
When a message slot is allocated, a space of the corresponding length is marked off directly at the tail pointer, after alignment to an 8-byte boundary, and the tail pointer is then moved; this also means that the message slot has simultaneously entered the system message queue. When allocation reaches the very end of the linear memory block and the remaining free space there may be unable to hold a complete message, that end space is allocated as a discarded message slot and the new message is allocated contiguously at the next free position (the very start of the linear space). Because slot boundaries are always 8-byte aligned, which equals the length of the message header, the discarded slot at the very end can hold at least the header of the next message, so no out-of-bounds read or write fault can occur when the dormancy mark is written concurrently with a CAS2 operation.
Because the length of a message slot is exactly enough to hold one message, the slot length can be computed directly from the message length; and because slots are allocated contiguously, the length of a slot in fact also implies the position of the next slot. Therefore, without any additional information, all the messages naturally form a FIFO singly linked list: starting from the head pointer, all messages in the queue can be traversed in enqueue order.
A message is dequeued directly at the queue head pointer, after which the head pointer head points to the next message slot; this also means that the previous slot's space has been discarded and reclaimed and has re-entered the free linear space. After a message has been used it can also be discarded directly without being dequeued. The discard mark is implicit in the operator ID of the header: an id of 0 means the message has been discarded and the scheduler no longer pays attention to it; an id other than 0 means it is a valid message that needs to be scheduled and executed.
In this way, messages enqueued in parallel are enqueued only at the queue tail and modify only the queue's tail pointer tail, while dequeued messages are dequeued only at the queue head and modify only the queue's head pointer head. Concurrent, competitive enqueue and dequeue operations can therefore be completed naturally and easily without any other critical-resource protection measures, improving execution efficiency.
Referring to Fig. 5, the core operations of the present case are three:
A1, allocate a dormant slot and enqueue it; A2, commit and activate the slot; A3, schedule and execute.
An external environment or an internal operator that needs to send a message calls the A1 operation according to the message length and obtains a private dormant message slot. It then copies the remainder of the message into that slot. Finally, it calls the A2 operation with the target operator ID and the length parameter of the message, activating the message, and waits for the bus to schedule and process it.
The bus A3 operation of the present case is very simple and intuitive: it merely handles dormancy and discard-and-reclaim. The proxy-operator concept is placed inside the scheduler, which is of great benefit for transparent distributed computation: in the link configuration of component assembly, an ID used inside a component can be linked directly to an external node, without additionally coding a local operator whose only purpose is to forward the message to that external node.
During the bus A3 operation, for an ordinary operator, the message is first marked discarded and the target operator corresponding to the message is then executed. The reason is that this gives the operator a chance to recycle the message: as long as the operator clears the discard mark, the message can be reused, improving system execution efficiency. For example, in an error-handling operator, changing the message's ID to that of another operator quickly and preferentially forwards the message to the subsequent error-handling operator; because the message is still at the head of the message queue at that moment, it obtains preferential execution.
In the A2 operation, the instant the length parameter sz (greater than 0) is written into the size field of the sleeping message header, the sleeping slot is activated (while a slot is dormant, the size field of its header is 0). To improve execution efficiency, a signal is sent to wake the sleeping scheduler only when the message queue was just empty, i.e. only when this message is the first message in the queue; the wake-up signal may also be sent repeatedly. A sketch of A2 follows.
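A sketch of the A2 operation under the assumptions above, reusing the frog_msg_head structure sketched earlier; the wake-up helpers are assumed names for whatever platform signal is used.

    #include <stdint.h>

    /* frog_msg_head as sketched after the header description above */
    typedef struct { uint16_t size; uint16_t cas2cnt; uint32_t id; } frog_msg_head;

    extern int  queue_was_empty(void);    /* assumed helper                  */
    extern void wake_scheduler(void);     /* platform-specific wake signal   */

    /* A2: the message body has been copied in; publish the message.
     * Writing a non-zero size is the single store that flips the slot from
     * dormant to active, so it must be the last write to the header.       */
    static void a2_activate(volatile frog_msg_head *h,
                            uint32_t target_id, uint16_t sz)
    {
        h->id   = target_id;    /* non-zero: message is valid, not discarded  */
        h->size = sz;           /* sz > 0: the slot becomes active             */
        if (queue_was_empty())  /* only the first pending message must wake    */
            wake_scheduler();   /* the scheduler; repeated wake-ups are benign */
    }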
The bus A1 operation, i.e. the lock-free marking, allocation and enqueueing, uses CAS/CAS2 operations:
(1) Take a snapshot snap of the tail pointer tail and of the message-slot header it points to. At that moment snap may in fact be useless garbage data, or it may be a valid header already handled by someone else: a header that has been marked, a header in the middle of being filled, or simply a completely filled message header. The snapshot is then compared repeatedly, in real time, with the tail pointer to guarantee that it was taken from the latest queue tail. A snapshot that passes this check can no longer be a header that is being filled or has been filled completely, because in that situation the tail pointer would necessarily have been changed by someone else.
(2) Write an identical mark M into the memory corresponding to snapshot snap, meaning "dormant and valid": the size field of the header is 0 and the id field is non-zero. Because someone else may already have raced ahead and started filling, a CAS2 atomic operation is used to avoid destroying that same memory. During the CAS2 operation the counter field cas2cnt is written back together with the mark M, as the value obtained in the original snap plus 1. This CAS2 operation therefore guarantees that, before the mark M is written, exactly one of the competing parallel writers succeeds in writing M, and that after M has been written only the cas2cnt field of the header can still be modified; on the whole this guarantees that the mark M is reliably written in advance and that no useful header information subsequently written by others can be destroyed.
(3) Modify the queue tail pointer tail, competing to enqueue. Because the ring space would have to wrap around a whole circle to return to the same place, which has an extremely small probability, the new and old message-slot pointers are essentially never equal and no ABA problem exists; a plain CAS operation is therefore enough to complete the competitive write of the tail pointer, finishing both the space allocation and the enqueue. A sketch of A1 under these assumptions follows.
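A sketch of A1 under the same assumptions, using the GCC __sync builtin on a 64-bit operand to stand for cmpxchg8b (CAS2, available on 32-bit x86 targets that support it) and a 32-bit CAS for the tail swing; the end-of-buffer discarded-slot handling and the head-overrun check are omitted for brevity, so this only illustrates the snapshot / pre-mark / swing-tail idea rather than a complete allocator.

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint16_t size; uint16_t cas2cnt; uint32_t id; } frog_msg_head;

    extern uint8_t           ring[];     /* 8-byte aligned linear memory block */
    extern volatile uint32_t q_tail;     /* byte offset of the tail slot       */

    #define ALIGN8(n)  (((n) + 7u) & ~7u)
    #define MARK_ID    0xFFFFFFFFu       /* assumed fixed non-zero mark value  */

    static frog_msg_head *a1_alloc(uint16_t msg_len)
    {
        for (;;) {
            uint32_t t = q_tail;                              /* snapshot tail  */
            volatile uint64_t *h64 = (volatile uint64_t *)&ring[t];
            uint64_t snap = *h64;                             /* snapshot head  */
            if (t != q_tail)                                  /* stale snapshot */
                continue;

            frog_msg_head mark;
            memcpy(&mark, &snap, sizeof mark);
            mark.cas2cnt++;                  /* defeats ABA on the header      */
            mark.size = 0;                   /* dormant                        */
            mark.id   = MARK_ID;             /* valid but not yet filled       */
            uint64_t mark64;
            memcpy(&mark64, &mark, sizeof mark64);

            /* CAS2: exactly one competitor writes the dormancy mark first    */
            if (!__sync_bool_compare_and_swap(h64, snap, mark64))
                continue;

            /* plain CAS swings the tail; whoever wins owns the marked slot   */
            uint32_t next = t + ALIGN8(sizeof(frog_msg_head) + msg_len);
            if (__sync_bool_compare_and_swap(&q_tail, t, next))
                return (frog_msg_head *)&ring[t];
        }
    }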
The above is one specific embodiment of the collaborative concurrent frog message bus with non-blocking enqueue.
The collaborative concurrent frog message bus with non-blocking enqueue provided by the invention can effectively overcome the weaknesses of existing "concurrency" implementation techniques and realise concurrency and parallel programming efficiently and reliably, with a series of advantages: universality, low cost, high efficiency, reliability, energy saving, reusability, transparent distribution, a micro-kernel character and inherent support for object technology. Concretely, it has the following advantages:
(1) Universality: it can be applied widely to all kinds of computer architectures, such as single-processor systems, multi-vector systems, massively parallel systems, symmetric multiprocessing systems, cluster systems, vector machines, supercomputers, embedded systems, and so on; to all kinds of processor architectures and CPUs, such as the x86 architecture, RISC architectures, ARM processors, 8051 microprocessors, single-chip microcomputers, and so on; and to all kinds of operating systems and software systems, such as IBM OS/400, Windows, Unix, iOS, vxWorks, ucOS-II, sequential programming, structured programming, modular programming, database systems, and so on. For all these widely varying hardware environments, one unified concurrency model can be used.
(2) Low cost: it can be realised directly on existing hardware and is fully compatible with existing software and hardware systems and techniques. To obtain further advantages, a unified and very inexpensive hardware facility can also be adopted to complete the critical atomic operations of the technical model.
(3) High efficiency. Space efficiency is high: the core C-language source code does not exceed a few hundred lines. Time efficiency is high: concurrency efficiency is better than that of existing ordinary thread techniques by more than an order of magnitude; if hardware facilities complete the critical atomic operations, concurrency efficiency reaches the same rank as an assembly-level subroutine call instruction, i.e. one concurrent scheduling operation can be completed within a few or a few tens of machine instruction cycles. Development efficiency is high: combined with the distinctive programming model and assembly-reuse techniques, development efficiency can exceed that of existing ordinary modular and object-oriented programming by more than an order of magnitude.
(4) High reliability: the core code is very small and very easy to verify as correct; concurrency is realised with lock-free or wait-free techniques, so the core never deadlocks or crashes; collaborative concurrency eliminates a great deal of meaningless race competition and avoids application timing faults; and the component-reuse programming model reuses component assemblies that have been proven reliable.
(5) Energy saving: a message- and event-driven mechanism is adopted, so when there is no load the system detects this immediately and automatically and enters a power-saving mode.
(6) Transparent distributed computation: a concurrent operator (Actor) is represented in the system only by its ID, and concurrent operators communicate only by messages, so where an operator is stored and where it executes are completely irrelevant. The model therefore naturally adapts to chip multiprocessor CMP (Chip Multi-Processor) structures, symmetric multiprocessor SMP (Symmetrical Multi-Processor) structures, asymmetric multiprocessor AMP (Asymmetrical Multi-Processor) structures, non-uniform memory access NUMA (Non-Uniform Memory Access) structures, massively parallel processing MPP (Massive Parallel Process) structures, computer clusters, distributed computing and other parallel and distributed environments. Load balancing and computation migration are easy, computing efficiency is easily improved, and a globally unified computing environment can technically be realised.
(7) Micro-kernel character: the core code is tiny and realises the concurrency mechanism through the efficient message bus. An operating system can be architected on top of it completely and efficiently and can compete head-on with monolithic-kernel systems.
(8) Support for object technology: it can accommodate ultra-large numbers of concurrent operator (Actor) components, all of which communicate through the efficient message bus, perfectly simulating and realising the behaviour and mechanisms of active objects in object technology.
Embodiment Two: the runnable-component composition model
The difference between the present embodiment and Embodiment One is that the set P of Embodiment One further comprises a 0th layer runnable component; the 1st layer runnable component of Embodiment One is assembled on the basis of the message bus to obtain the 0th layer runnable component.
The 0th layer runnable component comprises the message bus, a 0th layer interface-operator ID mapping table, a 0th layer alias linked list and one or more 0th layer operators; the 1st layer runnable component comprises the 1st layer virtual message bus, the 1st layer interface-operator ID mapping table, the 1st layer alias linked list and one or more 1st layer operators.
The assembly of the 1st layer runnable component on the basis of the message bus to obtain the 0th layer runnable component is specifically:
During assembly, the 1st layer virtual message bus is fused into the message bus; the 1st layer interface-operator ID mapping table is merged into the 0th layer interface-operator ID mapping table; the 1st layer alias linked list is merged into the 0th layer alias linked list; and the 1st layer operators are combined into the 0th layer operators.
Through the present embodiment, the independent runnable-component composition model obtained in Embodiment One is attached to a message bus, yielding the runnable-component composition model. The message bus here may be any physical bus in the prior art, or it may be the collaborative concurrent frog message bus introduced in Embodiment One; the present invention places no limitation on this.
Embodiment three component method for splitting
The present embodiment provides a kind of and carries out component method for splitting to running driving member composition model, comprises the following steps:
Preset component and split rule, when described run driving member composition model meet described component split rule time, by described component split rule split described in can run driving member composition model.
The invention provides following four kinds of components and split rule:
(1) the first component splits rule
Component splits rule: when the scheduler program of described messaging bus is performed by two or more kernel or processor, described messaging bus is split into the sub-bus of the distributed equity identical with described number of cores or described processor quantity; In described driving member composition model, each layer driving member described in each is articulated in corresponding described sub-bus respectively.
Specifically, because the bus schedules and executes cooperatively, one bus is suited to having its scheduler executed by a single processor core; the scheduler of the same bus cannot be executed by multiple cores or multiple processors simultaneously. In a multi-core or multi-processor system, if the message load on one bus is very heavy, a single core of a single processor executing that bus's scheduler becomes inadequate. The bus can therefore be split into two or more buses according to the number of cores and processors, with each processor core responsible for running one of them, and the work of load transfer is thereby completed automatically. Because operators communicate only by messages, which sub-bus a particular operator runs on does not affect the data-communication relationships the operators had on the original single system bus. Owing to the locality principle of information, communication between operators inside a component is generally far more frequent than communication outside the component, so the bus should be split in units of components. In this way, the virtual message buses that no longer existed inside components are materialized again as real sub-buses. Of course, if bus splitting may be needed, much of the component information that could otherwise be discarded at the original compilation and linking stage must be recorded, so that the original component structure and information can be rebuilt and reproduced.
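For illustration only, a minimal C sketch of this first rule follows. It assumes a fixed component list and a core count reported by the operating system, and deals whole components out to per-core sub-buses so that intra-component traffic stays on a single sub-bus; the names and the round-robin assignment are illustrative assumptions, not the claimed procedure.

    #include <stdio.h>

    #define MAX_COMPONENTS 6

    struct component { const char *name; };

    int main(void)
    {
        struct component comps[MAX_COMPONENTS] = {
            {"net"}, {"ui"}, {"storage"}, {"codec"}, {"sensor"}, {"log"}
        };
        int cores = 2;                      /* e.g. reported by the OS       */

        /* One sub-bus per core; each core runs the scheduler of exactly one
         * sub-bus, so no scheduler is ever executed by two cores at once.  */
        for (int i = 0; i < MAX_COMPONENTS; i++) {
            int sub_bus = i % cores;        /* split in units of components  */
            printf("component %-8s -> sub-bus %d (core %d)\n",
                   comps[i].name, sub_bus, sub_bus);
        }
        return 0;
    }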
(2) Second component splitting rule
The component splitting rule is: the load of each driving member in the driving member composition model is gathered dynamically, and according to a preset load-balancing principle the message bus is dynamically split into a plurality of distributed peer sub-buses; each driving member of each layer, or each operator, in the driving member composition model is mounted on its corresponding sub-bus.
(3) Third component splitting rule
The component splitting rule is: the energy-efficiency ratio of each driving member in the driving member composition model is gathered dynamically, and according to a preset energy-saving principle the message bus is dynamically split into a plurality of distributed peer sub-buses; each driving member of each layer, or each operator, in the driving member composition model is mounted on its corresponding sub-bus.
(4) Fourth component splitting rule
The component splitting rule is: the failure rate of each driving member in the driving member composition model is gathered dynamically, and according to a preset reliability principle the message bus is dynamically split into a plurality of distributed peer sub-buses; each driving member of each layer, or each operator, in the driving member composition model is mounted on its corresponding sub-bus.
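For illustration only, the following C sketch hints at how the dynamic rules above might be driven: per-component statistics are sampled periodically and a preset rule decides which sub-bus each component should move to. The stats structure, the capacity threshold and the decide_sub_bus() rule are illustrative assumptions; the patent does not prescribe particular thresholds or sampling periods.

    #include <stdio.h>

    struct stats {
        const char *component;
        double load;        /* messages per second                          */
        double failures;    /* faults per hour (used by the reliability rule) */
    };

    /* A toy load-balancing rule: place a component on its own sub-bus
     * whenever its measured load exceeds what one core is assumed to handle. */
    static int decide_sub_bus(const struct stats *s, double per_core_capacity)
    {
        return s->load > per_core_capacity ? 1 : 0;
    }

    int main(void)
    {
        struct stats samples[] = {
            { "video",   9000.0, 0.1 },
            { "control",  300.0, 0.0 },
            { "logging",  120.0, 0.0 },
        };
        double capacity = 5000.0;

        for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
            printf("%-8s load=%7.1f -> sub-bus %d\n",
                   samples[i].component, samples[i].load,
                   decide_sub_bus(&samples[i], capacity));
        return 0;
    }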
The independent driving member, runnable driving member composition model and component splitting method provided by the invention assemble many small driving members into, finally, a large driving member whose component protocol is identical to that of each small driving member. The large driving member completely eliminates call dependencies on the subordinate small driving members, leaving only loose data coupling between components. A component can thus leave its concrete application environment and fulfil its function independently, so that components can be reused, restructured and combined concisely and efficiently, and the whole component system gains a high degree of reusability.
The above is only a preferred embodiment of the present invention. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention.

Claims (20)

1. An independent driving member composition model, characterized in that the independent driving member composition model is a set P = {1st-layer driving member, 2nd-layer driving member subset, ..., nth-layer driving member subset}, wherein n ≥ 2; the driving members in the nth-layer driving member subset are assembled on an nth-layer virtual message bus to obtain a single driving member in the (n-1)th-layer driving member subset; the driving members in the (n-1)th-layer driving member subset are assembled on an (n-1)th-layer virtual message bus to obtain a single driving member in the (n-2)th-layer driving member subset; and so on, until the driving members in the 2nd-layer driving member subset are assembled on a 2nd-layer virtual message bus to obtain the 1st-layer driving member;
wherein each driving member of each layer in the set P complies with the same protocol.
2. The independent driving member composition model according to claim 1, characterized in that, in the set P, each driving member from the 1st-layer driving member through the nth-layer driving member subset comprises: the virtual message bus, an interface operator ID mapping table, an alias linked list and one or more operators; wherein the interface operator ID mapping table stores the correspondence between interface operator IDs and entry functions; the alias linked list stores the correspondence between quoted operator IDs and the interface operator IDs; the interface operator IDs are the operator identifiers of the driving member itself, and the quoted operator IDs are the operator identifiers of the operators inside the driving member that are mounted on the virtual message bus.
3. The independent driving member composition model according to claim 2, characterized in that assembling the driving members in the nth-layer driving member subset on the nth-layer virtual message bus to obtain a single driving member in the (n-1)th-layer driving member subset specifically comprises:
each driving member in the nth-layer driving member subset respectively comprises an nth-layer virtual message bus, an nth-layer interface operator ID mapping table, an nth-layer alias linked list and one or more nth-layer operators; the single driving member in the (n-1)th-layer driving member subset obtained after assembly comprises an (n-1)th-layer virtual message bus, an (n-1)th-layer interface operator ID mapping table, an (n-1)th-layer alias linked list and one or more (n-1)th-layer operators;
during assembly, the nth-layer virtual message buses are fused to obtain the (n-1)th-layer virtual message bus; the nth-layer interface operator ID mapping tables are fused to obtain the (n-1)th-layer interface operator ID mapping table; the nth-layer alias linked lists are fused to obtain the (n-1)th-layer alias linked list; and the nth-layer operators are merged to obtain the (n-1)th-layer operators; wherein n ≥ 3.
4. The independent driving member composition model according to claim 2, characterized in that assembling the driving members in the 2nd-layer driving member subset on the 2nd-layer virtual message bus to obtain the 1st-layer driving member specifically comprises:
each driving member in the 2nd-layer driving member subset respectively comprises a 2nd-layer virtual message bus, a 2nd-layer interface operator ID mapping table, a 2nd-layer alias linked list and one or more 2nd-layer operators; the 1st-layer driving member comprises a 1st-layer virtual message bus, a 1st-layer interface operator ID mapping table, a 1st-layer alias linked list and one or more 1st-layer operators;
during assembly, the 2nd-layer virtual message buses are fused to obtain the 1st-layer virtual message bus; the 2nd-layer interface operator ID mapping tables are fused to obtain the 1st-layer interface operator ID mapping table; the 2nd-layer alias linked lists are fused to obtain the 1st-layer alias linked list; and the 2nd-layer operators are merged to obtain the 1st-layer operators.
5. The independent driving member composition model according to claim 2, characterized in that the correspondence between the quoted operator IDs and the interface operator IDs stored in the alias linked list is an equivalence mapping.
6. The independent driving member composition model according to claim 1, characterized in that the independent driving member composition model has a built-in collaborative concurrent frog bus interface for mounting onto a collaborative concurrent frog bus.
7. The independent driving member composition model according to claim 6, characterized in that the collaborative concurrent frog bus comprises: a data obtaining module, a parallel ring allocator, a linear memory block, a message packing module, a parallel enqueuer, a message queue pool, a queue scheduling manager, an entry mapping table and a system stack;
wherein the data obtaining module is configured to obtain a target operator ID and a message length value from a received pending external parallel message, the target operator ID being the identifier of the operator that will process the message; it is also configured to obtain the additional-management-message length value of an additional management message and to compute the sum of the additional-management-message length value and the obtained message length value, yielding a message footprint value; wherein the additional-management-message length value ≥ 0;
the parallel ring allocator is a non-blocking parallel space ring allocator configured to dynamically carve the linear memory block continuously according to a ring division principle, based on the message footprint value obtained by the data obtaining module, and to obtain, in parallel and without blocking, an empty message slot whose size equals the message footprint value;
the message packing module is configured to fill the message and the additional management message into the empty message slot allocated by the parallel ring allocator, obtaining a non-blank message slot;
the parallel enqueuer is configured to perform a non-blocking parallel enqueue operation on the empty message slot or the non-blank message slot;
the message queue pool is configured to buffer enqueued messages that have not yet been processed;
the queue scheduling manager is configured to select, according to a preset scheduling strategy, a designated message to be processed from the message queue pool, and to cooperatively dequeue the designated message;
the entry mapping table is looked up according to the target operator ID to obtain the function entry address corresponding to the target operator ID; according to the function entry address and the designated-message-slot address of the designated message, the corresponding operator execution function is called, thereby processing the dequeued designated message;
the system stack is a stack space shared by all operators on the collaborative concurrent frog bus; the stack spaces used by the operators overwrite one another, i.e. they are overlaid rather than stacked;
furthermore, operators on the collaborative concurrent frog bus have only a ready state: even when no message exists on the collaborative concurrent frog bus, the operators remain ready; once a message arrives on the bus and the operator corresponding to that message is scheduled, the scheduled operator obtains the processor immediately.
8. The independent driving member composition model according to claim 7, characterized in that the message is a fixed-length message or a variable-length message.
9. The independent driving member composition model according to claim 7, characterized in that, when the parallel ring allocator carves an empty message slot at the very end of the linear memory block, if the remaining free space at the end of the linear memory block is smaller than the message footprint value, the remaining free space at the end is simply given up and forms a discarded slot.
10. The independent driving member composition model according to claim 7, characterized in that the message packing module first fills the message and the additional management message into the empty message slot allocated by the parallel ring allocator, obtaining a non-blank message slot, and the parallel enqueuer then performs a non-blocking parallel enqueue operation on the non-blank message slot, specifically:
the parallel ring allocator is configured with a first head pointer and a first tail pointer; when a new empty message slot needs to be allocated, a space equal to the message footprint value is marked off directly after the first tail pointer at its current position, yielding the new empty message slot, and the first tail pointer is then moved, in a non-blocking parallel manner, to the tail of the new empty message slot;
the parallel enqueuer is configured with a second head pointer and a second tail pointer; the non-blocking parallel enqueue operation on the non-blank message slot is realized by moving the second tail pointer in a non-blocking parallel manner;
wherein the first head pointer and first tail pointer configured for the parallel ring allocator are different from the second head pointer and second tail pointer configured for the parallel enqueuer.
11. The independent driving member composition model according to claim 7, characterized in that the parallel enqueuer first performs a non-blocking parallel enqueue operation on the empty message slot, and the message packing module then fills the message and the additional management message into the enqueued empty message slot, specifically:
the parallel ring allocator and the parallel enqueuer share the same head pointer and tail pointer; at the same time as the parallel ring allocator allocates an empty message slot from the linear memory block, the empty message slot is also enqueued by the parallel enqueuer; the message packing module then fills the message and the additional management message into the enqueued empty message slot.
12. The independent driving member composition model according to claim 11, characterized in that it operates under a preemptive environment, wherein a preemptive environment refers to a software execution environment, such as a multi-core processor, a single-core multi-processor system or ordinary time-slice preemptive scheduling, in which a main thread and other threads can preempt one another or execute simultaneously in an interleaved, parallel manner; such an environment is called a preemptive execution environment, or preemptive environment for short;
before the parallel ring allocator allocates an empty message slot from the linear memory block, the empty message slot is put into a sleeping state in advance, an empty message slot in the sleeping state being called a sleeping message slot; the message packing module then fills the message and the additional management message into the sleeping message slot; after filling is complete, the sleeping message slot is activated, i.e. changed into an active state, a message slot in the active state being called an active message slot; wherein a sleeping message slot is a message slot that cannot be scheduled by the collaborative concurrent frog bus for execution by an operator, and an active message slot is a message slot within the normal scheduling scope of the collaborative concurrent frog bus.
13. The independent driving member composition model according to claim 12, characterized in that, when variable-length messages are used, a sleeping message slot is distinguished from an active message slot by whether the message length parameter written in the message slot is 0: when the message length parameter written in the message slot is 0, the message slot is a sleeping message slot; when it is not 0, the message slot is an active message slot.
14. The independent driving member composition model according to claim 7, characterized by further comprising a supervision and management centre configured to carry out centralized monitoring, analysis, control, filtering and management of the messages inside the collaborative concurrent frog bus.
15. The independent driving member composition model according to claim 7, characterized by further comprising a space reclamation module configured to reclaim dequeued messages and their message slots in the collaborative concurrent frog bus.
16. The independent driving member composition model according to claim 7, characterized by further comprising an energy-saving device configured to notify the application system using the collaborative concurrent frog bus to perform energy-saving scheduling immediately when no message exists on the collaborative concurrent frog bus.
17. A runnable driving member composition model based on the independent driving member composition model according to any one of claims 1-16, characterized in that the set P further comprises a 0th-layer driving member; the 1st-layer driving member is assembled on a message bus to obtain the 0th-layer driving member.
18. The runnable driving member composition model according to claim 17, characterized in that the 0th-layer driving member comprises the message bus, a 0th-layer interface operator ID mapping table, a 0th-layer alias linked list and one or more 0th-layer operators; the 1st-layer driving member comprises a 1st-layer virtual message bus, a 1st-layer interface operator ID mapping table, a 1st-layer alias linked list and one or more 1st-layer operators;
assembling the 1st-layer driving member on the message bus to obtain the 0th-layer driving member specifically comprises:
during assembly, the 1st-layer virtual message bus is fused with the bus to obtain the message bus; the 1st-layer interface operator ID mapping table is fused to obtain the 0th-layer interface operator ID mapping table; the 1st-layer alias linked list is fused to obtain the 0th-layer alias linked list; and the 1st-layer operators are merged to obtain the 0th-layer operators.
19. A component splitting method for the runnable driving member composition model according to either of claims 17-18, characterized by comprising the following steps:
presetting a component splitting rule, and, when the runnable driving member composition model satisfies the component splitting rule, splitting the runnable driving member composition model according to the component splitting rule.
20. The component splitting method according to claim 19, characterized in that the component splitting rule is: when the scheduler of the message bus is executed by two or more cores or processors, the message bus is split into as many distributed peer sub-buses as there are cores or processors, and each driving member of each layer in the runnable driving member composition model is mounted on its corresponding sub-bus; or
the component splitting rule is: the load of each driving member in the runnable driving member composition model is gathered dynamically, and according to a preset load-balancing principle the message bus is dynamically split into a plurality of distributed peer sub-buses, each driving member of each layer, or each operator, in the runnable driving member composition model being mounted on its corresponding sub-bus; or
the component splitting rule is: the energy-efficiency ratio of each driving member in the runnable driving member composition model is gathered dynamically, and according to a preset energy-saving principle the message bus is dynamically split into a plurality of distributed peer sub-buses, each driving member of each layer, or each operator, in the runnable driving member composition model being mounted on its corresponding sub-bus; or
the component splitting rule is: the failure rate of each driving member in the runnable driving member composition model is gathered dynamically, and according to a preset reliability principle the message bus is dynamically split into a plurality of distributed peer sub-buses, each driving member of each layer, or each operator, in the runnable driving member composition model being mounted on its corresponding sub-bus.
CN201310020477.1A 2013-01-18 2013-01-18 Independent driving member and driving member composition model and component method for splitting can be run Active CN103473032B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310020477.1A CN103473032B (en) 2013-01-18 2013-01-18 Independent driving member and driving member composition model and component method for splitting can be run
PCT/CN2013/001370 WO2014110701A1 (en) 2013-01-18 2013-11-11 Independent active member and functional active member assembly module and member disassembly method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310020477.1A CN103473032B (en) 2013-01-18 2013-01-18 Independent driving member and driving member composition model and component method for splitting can be run

Publications (2)

Publication Number Publication Date
CN103473032A CN103473032A (en) 2013-12-25
CN103473032B true CN103473032B (en) 2016-01-27

Family

ID=49797909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310020477.1A Active CN103473032B (en) 2013-01-18 2013-01-18 Independent driving member and driving member composition model and component method for splitting can be run

Country Status (2)

Country Link
CN (1) CN103473032B (en)
WO (1) WO2014110701A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598801A (en) * 2015-10-15 2017-04-26 中兴通讯股份有限公司 Coroutine monitoring method and apparatus
CN113553100B (en) * 2021-06-29 2023-03-14 袁敬 End-to-end self-organized intelligent computing framework and application method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523104A (en) * 2011-11-30 2012-06-27 中国电子科技集团公司第二十八研究所 Networked simulation operation supporting system and method
CN102567267A (en) * 2010-12-31 2012-07-11 北京大唐高鸿数据网络技术有限公司 Method for expanding time division multiplexing (TDM) bus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7210145B2 (en) * 2001-10-15 2007-04-24 Edss, Inc. Technology for integrated computation and communication; TICC
US7979844B2 (en) * 2008-10-14 2011-07-12 Edss, Inc. TICC-paradigm to build formally verified parallel software for multi-core chips

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567267A (en) * 2010-12-31 2012-07-11 北京大唐高鸿数据网络技术有限公司 Method for expanding time division multiplexing (TDM) bus
CN102523104A (en) * 2011-11-30 2012-06-27 中国电子科技集团公司第二十八研究所 Networked simulation operation supporting system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A refinement approach for active components based on component calculus; Chen Xin et al.; Journal of Software; 2008-05-31; Vol. 19, No. 5; pp. 1134-1148 *

Also Published As

Publication number Publication date
WO2014110701A1 (en) 2014-07-24
CN103473032A (en) 2013-12-25

Similar Documents

Publication Publication Date Title
CN103473031B (en) Collaborative concurrent type frog messaging bus, driving member composition model and component method for splitting
Bal et al. Distributed programming with shared data
CN103930875B (en) Software virtual machine for acceleration of transactional data processing
CN101884024B (en) Management traffic in based on the calculating of figure
Yan et al. Application of multiagent systems in project management
CN105719126B (en) system and method for scheduling Internet big data tasks based on life cycle model
CN103279390A (en) Parallel processing system for small operation optimizing
CN102831011A (en) Task scheduling method and device based on multi-core system
GB2276742A (en) Parallel computation.
Cicirelli et al. Modelling and simulation of complex manufacturing systems using statechart-based actors
Otte et al. Efficient and deterministic application deployment in component-based enterprise distributed real-time and embedded systems
Erb et al. Chronograph: A distributed processing platform for online and batch computations on event-sourced graphs
CN102780583A (en) Method for service description, service combination and service quality assessment of Internet of Things
CN103473032B (en) Independent driving member and driving member composition model and component method for splitting can be run
Schwan et al. “Topologies”—distributed objects on multicomputers
Jafer et al. Conservative DEVS: a novel protocol for parallel conservative simulation of DEVS and cell-DEVS models
Peng et al. Graph-based methods for the analysis of large-scale multiagent systems
Newton et al. Intel concurrent collections for haskell
Louati et al. RTO-RTDB: A real-time object-oriented database model
Bellavista et al. Quality-of-service in data center stream processing for smart city applications
Biörnstad A workflow approach to stream processing
Aronson et al. An hla compliant agent-based fast-time simulation architecture for analysis of civil aviation concepts
Mordinyi Managing complex and dynamic software systems with space-based computing
Calha A holistic approach towards flexible distributed systems
Schoen The CAOS system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160907

Address after: Song Ling Zhen Street in Wujiang District of Suzhou City, the 215201 mountains in Jiangsu province science and Technology Building No. 500 white

Patentee after: Suzhou trust ant Software Co., Ltd.

Address before: 213001 Jiangsu province Changzhou Guangcheng Road District three room 603 a unit

Patentee before: Long Jian

C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: Xiu Jie Wujiang Song Ling Zhen District of Suzhou city in Jiangsu province 215201 softcastle Technology Building No. 500

Patentee after: Suzhou trust ant Software Co., Ltd.

Address before: Song Ling Zhen Street in Wujiang District of Suzhou City, the 215201 mountains in Jiangsu province science and Technology Building No. 500 white

Patentee before: Suzhou trust ant Software Co., Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200319

Address after: Room 303, No.1, paddy Internet Industrial Park, No.399, Xiarong street, Wujiang District, Suzhou City, Jiangsu Province 215200

Patentee after: Suzhou Abbe Intelligent Technology Co., Ltd

Address before: Baichuang technology building, No. 500, Shuixiu street, Songling Town, Wujiang District, Suzhou

Patentee before: Suzhou trust ant Software Co., Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220107

Address after: 215000 No. 1801, pangjin Road, Jiangling street, Wujiang District, Suzhou City, Jiangsu Province

Patentee after: Suzhou shenku robot Co.,Ltd.

Address before: 215200 Room 303, 1, paddy Internet Industrial Park, 399 Xiarong street, Wujiang District, Suzhou City, Jiangsu Province

Patentee before: Suzhou Abbe Intelligent Technology Co.,Ltd.