AU2008281324B2 - A method and system for reactively assigning computational threads of control between processors - Google Patents


Info

Publication number
AU2008281324B2
Authority
AU
Australia
Prior art keywords
entities
behavior
data
container
entity
Prior art date
Legal status
Ceased
Application number
AU2008281324A
Other versions
AU2008281324A1 (en)
Inventor
Martin Gregory Graham
Current Assignee
Clear Falls Pty Ltd
Original Assignee
Clear Falls Pty Ltd
Priority date
Filing date
Publication date
Priority to AU2007904048A0
Priority to AU2007101028
Application filed by Clear Falls Pty Ltd
Priority to PCT/AU2008/001104 (published as WO2009015432A1)
Priority to AU2008281324A
Publication of AU2008281324A1
Application granted
Publication of AU2008281324B2
Application status: Ceased
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5072: Grid computing

Description

A Method and System for Reactively Assigning Computational Threads of Control between Processors

BACKGROUND OF THE INVENTION

The present invention relates to a method and system for reactively assigning computational threads of control between processors.

The prior art is derived from known implementations of event-driven and data-driven coordination models. These can provide either shared-state concurrency or message-passing concurrency; broad examples are generative communications and actors.

At a high level, coordination is defined as the methods and tools that allow several computational entities to cooperate towards a common goal. In the context of this invention, coordination refers to the ongoing dynamic and concurrent assignment of computational threads of control between processors, where common goals may include increased utilization of available processing time. A coordination model provides a framework to organize this cooperation by defining three elements: a) the coordination entities whose cooperation is being organized, e.g. processes, threads and various forms of "agents"; b) the coordination media through which the entities communicate, e.g. messages and shared variables; and c) the coordination rules, which define the interaction primitives and patterns used by the cooperating entities to achieve coordination. In this context a coordination model can be thought of as the glue that binds computational activities into an ensemble.

Relevant categories and implementations of prior art are:
1. Event-driven models: JEDI, ELVIN and SEDA.
2. Data-driven models: Linda, LIMBO, LIME, Sun Microsystems JavaSpaces, IBM T-Spaces, ObjectSpaces, Objective Linda and GigaSpaces.

Herein the Applicant draws upon the definitions of software-level multi-threading in distributed systems, in particular those for 'work sharing' and 'work stealing' and their algorithmic implementations summarized by T. A. Marsland et al. ("A Study of Software Multi-Threading in Distributed Systems", T. A. Marsland, Yaoqing Gao and Francis C. M. Lau, Technical Report TR 95-23, 20 November 1995), as well as the coordination and composition of systems through "generative communications". Generative communications is one of a number of alternatives to the traditional message-queue-based concurrency model.

Software-level multi-threading as outlined by Marsland et al. relates to software-level threading mechanisms in distributed systems. In summary, the paper defines relevant terms and the concepts of process and thread control within the software-level context of the paper. In addition, two generic thread scheduling strategies are described, namely 1) work sharing and 2) work stealing, together with some examples of each. Finally, a number of programming paradigms are discussed, which include message-passing, active messages and data-driven paradigms. The problem with this disclosure is that scheduling rules are defined prior to run-time; the rules are therefore based on operating conditions assumed before execution, whereas actual operating conditions may change significantly at run-time.

Generative communications as defined by Gelernter [D. Gelernter, "Generative Communication in Linda", ACM Transactions on Programming Languages and Systems 7(1), 80-112 (1985)] refers to interacting computational entities that do not exchange messages directly, but through a coordinating medium which is a shared associative memory, wherein data exchanges through this memory are performed based upon 'write', 'take' and 'read' semantics, which are otherwise known in the literature as 'out', 'in' and 'rd'. Many implementations of coordination models using generative communications exist, some of which are noted above, as do models based upon the actor paradigm.

In generative communications coordinating entities can concurrently insert (or generate) data into the shared memory, whilst others can withdraw data from the shared memory. The process of inserting data is referred to as a 'write' operation. The action of withdrawing data, referred to as a 'read' or a 'take' operation, uses associative matching of the data in the shared associative memory. The 'read' and 'take' operations are defined as being non-deterministic in that the identity of the data returned is not determinable prior to the operation completing. The operations of 'read' and 'take' differ in that a 'read' makes a copy of the data resident in memory, whilst a 'take' makes a copy of the data and removes the original data from memory. Interaction through generative communication inherently uncouples communicating entities.

The advantage of generative communications is that a writer/sender of data does not directly contact another coordinating entity, and a reader (or taker) only contacts the shared memory when it actually requires the data, and therefore does not have to strongly couple to other coordinating entities. Due to temporal decoupling the reader (or taker) does not have to exist at all during the time of generation. This means that sender and receiver can be uncoupled both spatially and temporally, which is in contrast to most distributed languages, which are only partially uncoupled in space and not at all in time. This leads to the major advantage of generative communication: coordinating entities are able to communicate although they are 'anonymous' to each other. The two key characteristics of 1) uncoupled and 2) anonymous communication style directly contribute to the design of parallel and distributed applications: uncoupled communication allows abstracting from the details (such as identification and interface) of the entities that are interacting.
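By way of illustration only, and not forming part of the original disclosure, the following Python sketch shows the 'write', 'read' and 'take' primitives over a toy shared associative memory. The class name SharedSpace, the dictionary-template matching rule and the blocking behaviour are assumptions made for this example rather than features of any particular Linda implementation.

import threading

class SharedSpace:
    """A toy shared associative memory (coordinating medium)."""

    def __init__(self):
        self._items = []
        self._cond = threading.Condition()

    def write(self, item: dict) -> None:
        # 'write' (Linda 'out'): generate a datum into the shared memory.
        with self._cond:
            self._items.append(item)
            self._cond.notify_all()

    def _match(self, template: dict):
        # Associative matching: every key/value pair in the template must match.
        for item in self._items:
            if all(item.get(k) == v for k, v in template.items()):
                return item
        return None

    def read(self, template: dict) -> dict:
        # 'read' (Linda 'rd'): block until a matching datum exists, return a copy.
        with self._cond:
            while (item := self._match(template)) is None:
                self._cond.wait()
            return dict(item)

    def take(self, template: dict) -> dict:
        # 'take' (Linda 'in'): like 'read', but also removes the original datum.
        with self._cond:
            while (item := self._match(template)) is None:
                self._cond.wait()
            self._items.remove(item)
            return dict(item)

if __name__ == "__main__":
    space = SharedSpace()
    space.write({"type": "string-data", "value": "hello"})
    print(space.read({"type": "string-data"}))  # copy; original stays in the space
    print(space.take({"type": "string-data"}))  # copy; original removed

Because the writer never names a reader, and a matching reader need not even exist at the time of generation, the two sides remain spatially and temporally uncoupled, which is the property emphasised above.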
Gelernter, "Generative Communication in Linda", ACM Transactions on Programming 25 Languages and Systems 7(1), 80-112 (1985)] refers to interacting computational entities that 'do not exchange messages directly', but through coordinating media which is a shared associative memory, wherein data exchanges through this memory are performed based upon 'read', 'write' and 'take' semantics, which are otherwise known by the terms of 'out', 'in' and 30 'read' in the literature. Many implementations of coordination models using generative communications exist, some of which are noted above as do Amended Sheet PCT/AU2008/001104 Received 14 September 2009 2a AMENDED models based upon the actor paradigm. In generative communications coordinating entities can concurrently insert (or generate) data, into the shared memory, whilst others can withdraw data from 5 the shared memory. The process of inserting data is referred to as a 'write' operation. The action of withdrawing data, referred to as a 'read' or a 'take' operation, uses associative matching of the data and the shared associative memory. The 'read' and 'take' operations are defined as being non deterministic in that the identity data return is not determinable prior to the 10 operation completing. The operations of 'read' and 'take' differ in that a 'read' makes a copy of the data resident in memory, whilst the 'take' makes a copy of the data and removes original data from memory. Interaction through generative communication inherently uncouples communicating entities. 15 The advantage of generative communications being that a writer/sender of data does not directly contact another coordinating entity, and a reader (or taker) only contacts the shared memory when it actually requires the data, and therefore does not have to strongly couple to other coordinating entities. Due. to temporal decoupling the'reader (or taker) does not have to exist at all 20 during the time of generation. This means that sender and receiver can be uncoupled both spatially and temporally, which is in contrast to most distributed languages which are only partially uncoupled in space, and not at all in time. This leads to the -major advantage of generative communication: coordinating entities are able to communicate although they are 'anonymous' 25 to each other. The two key characteristics of 1) uncoupled and 2) anonymous communication style directly contribute to the design .of parallel and distributed applications: uncoupled communication allows abstracting from the details (such as identification and. interface) of the entities that are interacting. 30 Objective Linda (Holvoet. T. "Towards Generative Software Composition", in Amended Sheet PCT/AU2008/001104 Received 14 September 2009 2b AMENDED proceedings of the Thirty-first Annual Hawaii International Conference on System Sciences; 1998) is an example of a coordination model based upon generative communications which adds a. dynamic composition language to the basic Objective Linda semantics, derived from generative 5 communications detailed above. Besides adapting the Linda model to'object orientation, Objective Linda also provides an improved set of operations on the object spaces. The paper outlines the semantics of Objective Linda, and coins the term, "generative software composition", which is aimed at describing a' composition model in which generative features serve as the 10 -basis for the purposes of 'dynamically assembling and re-assembling configurations'. 
This is described as "components being 'static abstractions with plugs' [that) identify agents as static abstractions and the object spaces attached to them as plugs". Configurations are defined as being modeled with Objective Linda and consist of a number of: 1) agents, 2) objects spaces, and 15 3) objects stored in the object spaces. The paper also discloses four different aspects of dynamic composition: 1) creation and annihilation, 2) creation and deletion of object spaces, 3) exposure and hiding of object spaces, and finally 4) attaching to and detaching from object spaces. These aspects combined provide the underlying mechanism for dynamic reconfiguration of respective 20 components of the system which is the basis of the paper. The method for the design and, analysis of a system based upon this method is derived from a Petri-Net formalism, in particular a Colored Petri-Net formalism which is enhanced to support timeout annotations. This method of design and analysis, as the paper discloses, allows for, 'dynamic' composition'which is 25 employed at design-time. This method assigns agent entities, and their temporal and spatial execution at design-time and is therefore an a-priori based coordination mechanism for composition, scheduling and execution. In addition, the agent granularity and aggregate size of composed agents is determined at design-time. 30 Therefore with regard to coordination and dynamic composition the method Amended Sheet PCT/AU2008/001104 Received 14 September 2009 3 AMENDED clearly relates to a design-time method where object-oriented methodologies are applied to a generative communications coupling to build distributed computing systems which can be reconfigured. 5 The prior art presents a number of unresolved issues particularly in the area of reactively assigning computational threads of control between processors: 1. First that coordination rules are primarily a-priori based rules. That is rules for coordination of entity computational activities and interactions are specified within the model prior to run-time. 10 2. Second that software based on these coordination models is unable to leverage full capabilities at. run-time of processors as the a-priori construction of coordination rules limits the ability of making accurate predictions of necessary structure and behavior at run-time of applications. 15 3. Third that coordination- models cannot make accurate adaptive changes to activities and interactions due to insufficient a-priori- information of possible future operating conditions. 4. Fourth that the coordination - rules are not appropriately reactive to operating conditions, that is they.are based upon a-priori knowledge of the 20 system when operating. 5. Fifth that the coordinating entities are static and inflexible to reactive change with respect to operating conditions. 6. Sixth that the coordinating media are static and centralized and are inflexible to reactive change to operating conditions. 251 SUMMARY OF THE INVENTION . The present invention concerns a method and system for reactively assigning computational threads of control between processors. In particular this Amended Sheet WO 2009/015432 PCT/AU2008/001104 4 invention provides a coordination model implemented by a software framework. The coordination model comprises five (5) entities which implement the three previously defined elements of a coordination model: 5 1) Behavior entities are the computational threads of control. 
SUMMARY OF THE INVENTION

The present invention concerns a method and system for reactively assigning computational threads of control between processors. In particular this invention provides a coordination model implemented by a software framework. The coordination model comprises five (5) entities which implement the three previously defined elements of a coordination model:

1) Behavior entities are the computational threads of control.
2) Data entities are the result of the execution of a Behavior; that is, they are input and/or output to associated Behavior execution.
3) Container entities are part of the coordinating media, and retrieve, create, execute and store the Behavior entities and Data entities that are contained within. In general a Container is associated with each physical Processor; however, Containers are also able to control the instantiation of additional Containers per processor based upon, for example, the blocking status of executing Behaviors.
4) Source entities are part of the coordinating media, and allow for storage and retrieval of Data and Behavior entities and as such facilitate communication of: Behavior to Behavior, Behavior to Container and Container to Container, through the semantics of generative communications. A Source entity can, for example, be shared associative memory for storage and retrieval of Data and Behavior entities.
5) Processor entities comprise the physical entities that execute the instructions; some examples are: single computer chip CPUs, embedded software core processors, Hyper-Threading, homogeneous and heterogeneous multi-core processors, mobile devices and embedded devices.

This invention facilitates decomposition of an application into a cooperative collection of distributed and networked Behaviors, which are subsequently executed by Containers. A designer using this invention implements a Behavior for each logical stage of execution, which represents the core service-processing logic for that stage.

As defined, Behavior entities and Data entities are combined and executed by Container entities. In general a Container performs the following processes:
1) Data-Retrieval: retrieves Data entities;
2) Data-Behavior-Mapping: maps respective Behavior and Data combinations. A Container entity performs this mapping operation through a method referred to here as a 'Data-Behavior-Mapping'. An example of this mapping occurs in object-oriented terms, where objects have types and objects can be mapped to other objects with respective types and sub-types;
3) Behavior-Retrieval: retrieves the associated Behavior entity from a Source entity;
4) Execution: a Container entity loads and executes the associated Behavior entity. A Container entity passes the Data entity to the executing Behavior entity for processing;
5) Finalization: a result Data entity is returned to the Container entity; and
6) Aggregation: a returned result Data entity is returned to the Container entity for subsequent processing by the Container entity at different points of the process.

This invention provides a mechanism for structuring complex and potentially fragile applications. Rather than exposing a typed function-call API, Containers pull events of certain types and emit events of certain types; there need not be a one-to-one matching of event reception and emission, nor any coupling or synchronization, either spatially or temporally.
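As an illustrative aid only, and not as a definition taken from the original disclosure, the five entities enumerated above may be sketched as minimal Python types; the field and method names used here are assumptions made for the example.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Data:
    type: str              # used for Data-Behavior-Mapping and associative retrieval
    payload: object = None

@dataclass
class Behavior:
    consumes: str                             # the Data type this thread of control handles
    logic: Callable[[Data], Optional[Data]]   # core service-processing logic for one stage

class Source:
    """Coordinating medium: stores and retrieves Data and Behavior entities."""
    def __init__(self) -> None:
        self.data: List[Data] = []
        self.behaviors: List[Behavior] = []

@dataclass
class Container:
    """Retrieves, combines and executes Behavior and Data entities drawn from Sources."""
    sources: List[Source] = field(default_factory=list)

@dataclass
class Processor:
    """Physical execution resource hosting one or more Containers."""
    name: str
    containers: List[Container] = field(default_factory=list)

The detailed description below elaborates how a Container combines and executes these entities at run-time.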

The following scenarios represent some examples of implementations of the present invention.

1. Distributed / networked Processors implement a simple or complex Data-Behavior-Container-Source based application.
2. Homogeneous / heterogeneous multi-core Processors implement a simple or complex Data-Behavior-Container-Source based application.
3. Heterogeneous multi-core Processors implement a simple or complex Data-Behavior-Container-Source based application.
4. A microprocessor architecture which is implemented using Container entities. In short the Container architecture facilitates decomposition of an application at run-time into a cooperative collection of distributed and networked computational behaviors that communicate through generative communications. In this case the concurrent system bus communication is facilitated through generative communication, which is implemented through instructions added to the underlying instruction set implemented by the microprocessor. This could be considered a multi-thread processor.

With reference to the microprocessor architecture implementation, Data (instructions), exceptions and clock I/O are written to the system bus upon input, following which multiple Containers with requisite Behavior decode the input data and dynamically configure the necessary Behavior in order to process or produce data.

The present invention relates to a method and system for reactively coordinating heterogeneous computational work units between work unit processors and may be summarized by the following characteristics:
1) A hybrid manifestation of both 'work-sharing' and 'work-stealing' coordination strategies.
2) Scheduling information is not a-priori defined, but results from current operating conditions.
3) Scheduling rules are not a-priori defined, but result from current operating conditions, which adapt and compose generic scheduling rules at run-time.
4) Granularity of computational work unit size is not a-priori defined, but is dynamic and results from current operating conditions.
5) Dynamically composing heterogeneous computational work units to form dynamically reconfigurable computing systems at run-time.
6) The method and system can be manifested in both centralized and decentralized forms.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1: A legend for the representation of the respective elements depicted in subsequent figures.

Figure 2: A simple example for a single-processor computer with a single Container, where there exists an input Data entity (D1) which in this case is a 'string'; the first Behavior (B1) capitalizes each letter in the string and outputs the resultant string as D2; subsequently the second Behavior reverses the string.

Figure 3: A simple example on a two-processor computer with two Containers, where there exists an input Data entity (D1) which in this case is a 'string'; the first Behavior (B1) capitalizes each letter in the string and outputs the resultant string as D2; subsequently the second Behavior reverses the string.

Figure 4: Illustrates the structure of an example Behavior-based service. The service consists of a number of stages for handling network I/O, parsing HTTP requests, and so forth. Several stages, such as the network and file I/O interfaces, provide generic functionality that could be used by a range of services.
Figure 5: Illustrates the structure of an example Behavior-based service. The service consists of a number of stages for handling network I/O, parsing HTTP requests, and so forth. Several stages, such as the network and file I/O interfaces, provide generic functionality that could be used by a range of services.

Figure 6: Illustrates the process flow for Behavior and Data execution by a Container.

Figure 7: Illustrates one or more Processors connected via a LAN or WAN network, wherein each Processor may execute one or more Containers as per the method.

Figure 8: Illustrates one or more multi-core Processors, wherein the individual cores of the Processor may execute one or more Containers as per the method.

Figure 9: Illustrates one or more multi-core Processors, wherein the individual cores of the Processor(s) may execute one or more Containers as per the method.

Figure 10: Illustrates a dynamically reconfigurable pipelined multiprocessor system.

SPECIFIC DESCRIPTION OF PREFERRED EMBODIMENTS

The present invention concerns a method and system for reactively assigning computational threads of control between processors. In particular this invention provides a coordination model implemented by a software framework. The coordination model comprises five (5) entities which implement the three previously defined elements of a coordination model:

1) Behavior entities are the computational threads of control, which are executed by a Container entity;
2) Data entities are the result of the execution of a Behavior; that is, they are input and/or output to associated Behavior execution;
3) Container entities are part of the coordinating media, and retrieve, create, execute and store the Behavior and Data that are contained within. In general a Container is associated with each physical Processor; however, Containers are also able to dynamically control the instantiation of additional Containers per processor based upon, for example, the blocking status of executing Behaviors;
4) Source entities are part of the coordinating media, and allow for storage and retrieval of Data and Behavior entities and as such facilitate communications: Behavior to Behavior, Behavior to Container and Container to Container, through the semantics of generative communications. A Source entity can be dynamically connected to by Container(s) and Behavior(s). A Source entity can, for example, be shared associative memory for storage and retrieval of Data and Behavior entities. In addition, conceptually a Source entity can be viewed as a connection between Behavior(s) and Container(s);
5) Processor entities comprise the physical entities that execute the instructions; some examples are: single computer chip CPUs, embedded software core processors, Hyper-Threading, homogeneous and heterogeneous multi-core processors, mobile devices and embedded devices.

Figure 1 provides a legend for all subsequent representations and discussions of the described elements.

The combinations of Behavior and Data entities are the basic units of execution in this framework, akin to a piece of code that executes in a computer program. Each executing Behavior forms an individual and concurrent flow of control of the overall application; however, unlike the prior art, each Behavior is reactively assigned amongst the Containers as determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers and leverages non-deterministic generative communications.

Containers and Behaviors can communicate external to the system using known communication transports such as, for example, TCP/IP. As such Containers, Behaviors and Data entities are addressable. Internally, communication and interaction is through the use of 'Source' entities. Source entities allow for communication which is based upon generative communications. Such communication is derived from, but not limited by, the semantics of 'read', 'write' and 'take' as discussed previously. Source entities are also addressable.

It is highlighted that Behavior and Data are separate entities, which are managed by the Container. However, Behavior and Data are related entities. As such, for every Data entity there will exist zero or more Behavior entities.
Software applications are constructed from the combination of Behaviors associated by their input and output Data as well as the respective Source entities. The relationship of Behavior, Data and Source entities forms a flow graph, whose nodes are Behaviors and whose edges are Data and Source entities.

Behavior and Data entities may be associated in both a short-term and a long-term manner. Short-term is defined here as meaning that no blocking operations or looping instructions occur that would prevent termination of the Behavior execution. Long-term is defined here as meaning that operations occur which do not allow termination.

An application's flow-graph of Behaviors, Data and Sources is decomposed into a cooperative collection of distributed and networked Behaviors and Sources, which are subsequently executed by Containers. A designer using this invention implements a Behavior for each logical stage of execution, which represents the core service-processing logic for that stage.

Behaviors and Sources can be changed at run-time, providing for dynamic reconfiguration of an application. In addition, Containers can substitute alternate Behaviors during run-time based upon localized or remote algorithms.

Figure 2 represents a simple example on a single Processor with a single Container entity, where the dotted line represents the single processor and the grey area represents the execution of the Behavior within a Container, and the Data being processed. In this example the input Data (D1) is in this case a 'string'; the first Behavior (B1) capitalizes each letter in the string and then outputs the resultant string as D2; subsequently the Behavior (B2) reverses the string, producing Data D3.

In this respect an application design based upon separate Behavior and Data entities is referred to here as a control flow, wherein the granularity of the control flow is represented by the staged Behaviors in the flow.
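A minimal, self-contained Python sketch of the Figure 2 control flow follows; it assumes nothing beyond the description above, and the dictionary representation of Data entities and the helper names are illustrative assumptions only.

def behavior_capitalize(data: dict) -> dict:
    # B1: capitalizes each letter of the input string D1, producing D2.
    return {"type": "capitalized-string", "value": data["value"].upper()}

def behavior_reverse(data: dict) -> dict:
    # B2: reverses the capitalized string D2, producing D3.
    return {"type": "reversed-string", "value": data["value"][::-1]}

# Data-Behavior-Mapping for this control flow: which Behavior handles which Data type.
MAPPING = {
    "raw-string": behavior_capitalize,
    "capitalized-string": behavior_reverse,
}

def run_single_container(d1: dict) -> dict:
    """One Container executing the staged control flow D1 -> B1 -> D2 -> B2 -> D3."""
    datum = d1
    while datum["type"] in MAPPING:
        datum = MAPPING[datum["type"]](datum)
    return datum

if __name__ == "__main__":
    d1 = {"type": "raw-string", "value": "hello world"}
    print(run_single_container(d1))  # {'type': 'reversed-string', 'value': 'DLROW OLLEH'}

With two Containers, as in Figure 3, the same two Behaviors could execute on different processors, with the intermediate Data entity D2 exchanged through a Source rather than passed directly.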

Figure 3 represents an extension of the simple example of Figure 2; in this case there are two Processors available for processing and two Containers, one for each processor.

Figure 3 represents the data and task parallelism inherent in the execution of the Behavior entities and Data entities.

Behaviors as defined here are the entities to be coordinated. In the context of this invention Behaviors are the computational threads of control which are to be assigned reactively to respective processors, this assignment being represented in Figure 3.

A further decomposition into respective Behavior entities and Data entities is demonstrated with respect to a web server application. This decomposition could equally be applied to any server architecture in general. A high-level representation of the process flow is shown in Figure 4.

Figure 5 represents a partial implementation of the Figure 4 process flow in terms of the Behavior and Data decomposition upon a four-processor computer, with four Containers, one for each processor.

In Figure 5, D1 represents an incoming socket communication connection, which is processed by B1, an 'accept connection' Behavior; D2 represents a generic socket packet to be read, such as an HTTP request, which is processed by B2, a 'read packet' Behavior; D3 represents an SSL/TLS processing request, which is processed by a B3 'SSL/TLS request' Behavior; D4 represents a request packet to be processed, which is processed by a B4 'parse packet' Behavior; D5 represents an HTTP packet header, which is processed by a B5 'url dispatch' Behavior; D6 represents an 'HTTP url dispatch request', which is processed by a B6 'dynamic gen conditional' Behavior; D7 represents a 'dynamic page generation request', which is processed by a B7 'dynamic page generation' Behavior; D8 represents a 'static page request', which is processed by a B8 'file I/O' Behavior which retrieves the file from disk storage; and D9 represents a 'send response request', which is processed by a B9 'send response' Behavior. It is highlighted that in Figure 5 there are vertical dashed lines which indicate the time-slicing of events across Processors and Containers. This is significant in respect of this example, giving a representation of the inherent data and task parallelism that is characteristic of this invention.

As stated, Behavior and Data are separate entities that can be temporally and spatially uncoupled. Behavior and Data are related to each other and to their execution environment through the use of a Container and the respective Processor. The following process describes how these systems are composed to form software applications based upon these elements.

In order to build a software application based upon these entities it is necessary to compose the respective Behavior entities and Data entities. Containers are not necessarily a part of the design process. This process is referred to here as composition. An application can be composed at design-time or run-time. In the instance of 'design-time composition' an application can be composed from respective Behavior and Data entities based upon the functionality of the respective Behavior entities and the respective input and output Data entities. This composition can take the form of a flow-graph wherein Behaviors are interconnected via Source entities and respective Data entities. In general a software tool, such as a purpose-built graphical user interface integrated development environment, could be used for this purpose to manipulate these elements. However, the important point here is that composition occurs before execution of the particular Behaviors with Data.
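The following Python sketch, provided for illustration only, represents design-time composition as such a flow-graph and performs one simple check, namely that every produced Data type has a downstream consumer; the node structure, the generic type names and the validation rule are assumptions for the example and are not taken from the figures.

from dataclasses import dataclass
from typing import Dict, List, Optional, Set

@dataclass(frozen=True)
class BehaviorNode:
    name: str
    consumes: str             # input Data type (incoming edge)
    produces: Optional[str]   # output Data type (outgoing edge); None for a terminal stage

def unhandled_data_types(graph: List[BehaviorNode]) -> Set[str]:
    """Return produced Data types that no Behavior in the graph consumes."""
    consumers: Dict[str, List[str]] = {}
    for node in graph:
        consumers.setdefault(node.consumes, []).append(node.name)
    produced = {node.produces for node in graph if node.produces is not None}
    return {dtype for dtype in produced if dtype not in consumers}

if __name__ == "__main__":
    # A small fragment loosely modelled on the web-server decomposition above.
    graph = [
        BehaviorNode("accept connection", consumes="socket-connection", produces="raw-packet"),
        BehaviorNode("parse packet",      consumes="raw-packet",        produces="parsed-request"),
        BehaviorNode("url dispatch",      consumes="parsed-request",    produces=None),
    ]
    print("unhandled Data types:", unhandled_data_types(graph) or "none")

A purpose-built development tool of the kind mentioned above could apply this kind of structural check before deployment.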
In the instance of 'run-time composition' the application can be composed dynamically at run-time; that is, the specific mapping between Behavior entities and Data entities is determined only after the application begins executing. The result is that systems composed in this manner are dynamically reconfigurable.

In addition, composition may be a combination of design-time and run-time composition, where the whole or part of the application may be composed at design-time and other whole or parts of the application may be composed at run-time.

As defined, Behaviors and Data entities are combined and executed by Containers. In general a Container performs the following processes, depicted in Figure 6:

1) Data-Retrieval: retrieves Data entities;
2) Data-Behavior-Mapping: maps respective Behavior and Data combinations. A Container performs this mapping operation through a method referred to here as a 'Data-Behavior-Mapping'. An example of this mapping occurs in object-oriented terms, where objects have types and objects can be mapped to other objects with respective types and sub-types;
3) Behavior-Retrieval: retrieves the associated Behavior entity from a Source entity;
4) Execution: a Container entity loads and executes the associated Behavior entity. A Container entity passes the Data entity to the executing Behavior entity for processing;
5) Finalization: a result Data entity is returned to the Container entity; and
6) Aggregation: a returned result Data entity is returned to the Container entity for subsequent processing by the Container entity at different points of the process.
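For illustration only, the six processes just listed can be sketched as a single run-time loop in Python; the in-memory Source, the centralized mapping table and all identifiers below are assumptions made for the example and do not prescribe the implementation of the method.

import queue
from typing import Callable, Dict, Optional, Tuple

DataEntity = Tuple[str, object]                       # (Data type, payload)
BehaviorEntity = Callable[[DataEntity], Optional[DataEntity]]

class Source:
    """Toy coordinating medium holding Data entities and typed Behavior entities."""
    def __init__(self) -> None:
        self.data: "queue.Queue[DataEntity]" = queue.Queue()
        self.behaviors: Dict[str, BehaviorEntity] = {}

    def write_data(self, datum: DataEntity) -> None:
        self.data.put(datum)

    def take_data(self, timeout: float = 0.1) -> Optional[DataEntity]:
        try:
            return self.data.get(timeout=timeout)     # non-deterministic 'take'
        except queue.Empty:
            return None

    def take_behavior(self, behavior_type: Optional[str]) -> Optional[BehaviorEntity]:
        return self.behaviors.get(behavior_type) if behavior_type else None

class Container:
    def __init__(self, source: Source, mapping: Dict[str, str]) -> None:
        self.source = source
        self.mapping = mapping                        # centralized Data-Behavior-Mapping

    def step(self) -> bool:
        datum = self.source.take_data()               # 1) Data-Retrieval
        if datum is None:
            return False
        behavior_type = self.mapping.get(datum[0])    # 2) Data-Behavior-Mapping
        behavior = self.source.take_behavior(behavior_type)  # 3) Behavior-Retrieval
        if behavior is None:
            self.source.write_data(datum)             # leave the datum for another Container
            return False
        result = behavior(datum)                      # 4) Execution
        if result is not None:                        # 5) Finalization
            self.source.write_data(result)            # 6) Aggregation: result re-enters the flow
        return True

if __name__ == "__main__":
    src = Source()
    src.behaviors["upper"] = lambda d: ("capitalized", str(d[1]).upper())
    src.behaviors["reverse"] = lambda d: ("reversed", str(d[1])[::-1])
    container = Container(src, mapping={"raw": "upper", "capitalized": "reverse"})
    src.write_data(("raw", "hello"))
    while container.step():
        pass
    print(src.take_data())                            # ('reversed', 'OLLEH')

The same loop run by several Containers, each taking from a shared Source, yields the reactive assignment described below: whichever Container is free simply takes the next available Data entity.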

A Container entity dynamically, at run-time, combines associated Behavior entities and Data entities based upon the form of Data-Behavior-Mapping implemented and subsequently executes this combination.

In its simplest embodiment the process flow shown in Figure 6, of combination and execution by a Container, consists of, but is not limited to, the following:

1. Data-Retrieval: a Container entity obtains a Data entity from a Source;
2. Data-Behavior-Mapping: a Container entity performs the mapping process for the Data entity;
3. Behavior-Retrieval: a Container entity retrieves the associated Behavior entity from a Source entity;
4. Execution: a Container entity loads and executes the associated Behavior entity. A Container entity passes the Data entity to the executing Behavior entity for processing;
5. Finalization: a result Data entity is returned to the Container entity; and
6. Aggregation: a returned result Data entity is returned to the Container entity for subsequent processing by the Container entity.

In regard to the 'Data-Retrieval' process, this method employs generative communications. In particular, this method of Data-Retrieval relies upon the non-determinism inherent in the 'read' and 'take' semantics of generative communications, discussed in the background section, where the Container performs a 'read' or 'take' operation on a Source entity, which could be, for example, shared associative memory. The operation queries for a generic Data entity type, where the type may, for example, be an object-oriented type or XML.

The connected Source entity for Data entities may possess one or more Data types. In addition, a Source entity may possess varying populations of Data entity types. These populations may also vary with time. In this regard there will be a probability distribution of Data entity types which varies with time.

Subsequently, if a 'read' or 'take' operation of generative communications using a generic Data type is performed, the frequency of obtaining respective Data entity types over time is governed by the probability distribution of the Data types presented by the Source. That is, there is a variable probability of extracting respective Data types over time. This is a reactive process of Data-Retrieval, both internally and externally, which is reactive to the conditions of current Data type and population with time. Variation of Data type and population with time may, for example, be a result of operating conditions such as the current work-load or performance of Processors.

In regard to the 'Data-Behavior-Mapping' process, this method can be implemented in a form which is centralized, decentralized or both. In the centralized case there may exist one or more data structures in memory, such as a table or map, which allow a program to perform a search process to identify the respectively mapped Behavior entities. In the decentralized case the Data entity has the respectively mapped Behavior type(s) embedded within itself at design-time or run-time, in which case a search process is performed by the Container, in this case inspecting the Data entity for its associated typed Behavior. The Data-Behavior-Mapping process also has reactive characteristics which provide for: 1) dynamic behavior reconfiguration; 2) dynamic variation of the granularity of composed Behaviors, that is the ability for Containers to aggregate Behaviors and their execution within a Container; and 3) dynamic substitution of alternative Behavior at run-time.
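A short illustrative sketch of the two mapping forms follows; the table contents, field names and type strings are assumptions made for the example.

from typing import Dict, Optional

# Centralized form: one in-memory structure maps Data types to Behavior types.
CENTRAL_MAP: Dict[str, str] = {
    "http-request": "parse-packet-behavior",
    "static-page-request": "file-io-behavior",
}

def map_centralized(data_type: str) -> Optional[str]:
    return CENTRAL_MAP.get(data_type)

# Decentralized form: the Data entity carries its mapped Behavior type embedded
# within itself (at design-time or run-time) and the Container inspects the entity.
def map_decentralized(data_entity: dict) -> Optional[str]:
    return data_entity.get("behavior_type")

if __name__ == "__main__":
    print(map_centralized("http-request"))                         # parse-packet-behavior
    print(map_decentralized({"type": "stock-trade",
                             "behavior_type": "trade-behavior"}))  # trade-behavior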

In regard to the 'Behavior-Retrieval' process, this method employs generative communications; in particular, this method of Behavior-Retrieval relies upon the non-determinism inherent in the 'read' and/or 'take' semantics of generative communications, discussed in the background section. The Container can perform a 'read' or 'take' operation on a Source, which could be, for example, shared associative memory. This 'read' or 'take' operation queries for a specific Behavior entity type, where the type may be, for example, an object-oriented type or XML. The connected Source for Behavior entities may possess one or more Behavior types. In addition, a Source may possess populations of varying Behavior entity types. These populations may also vary with time. In this regard there will be a probability distribution of Behavior entity types which varies with time. Subsequently, if a 'read' or 'take' operation of generative communications using a specific Behavior type is performed, the frequency of obtaining respective Behavior entity types over time is governed by the probability distribution of the Behavior types presented by the Source. That is, there can be a variable probability of extracting respective Behavior types over time. This is a reactive process of Behavior-Retrieval, both internally and externally, which is reactive to the conditions of current Data and Behavior type and population with time. Variation of Behavior and Data type and population with time may, for example, be a result of operating conditions such as the current work-load or performance of Processors.

In regard to the 'Execution' process, this method loads the necessary Behavior instruction code and passes the respective Data entity to an entry point of the Behavior code for subsequent execution.

In regard to the 'Finalization' process, this method, upon termination of the currently executing Behavior and Data combination, will retrieve and store the resultant Data entity.

In regard to the 'Aggregation' process, this method will either return to the start, or will proceed to the Data-Behavior-Mapping step, based upon a policy which can be modified for respective circumstances.

There has always existed a significant barrier of complexity of design and construction to building highly distributed and dynamic systems. This invention solves this problem by allowing the design and construction of applications as per traditional methods such as those found in object-design methodologies.

At run-time this invention decomposes the application into a cooperative collection of distributed and networked computational behaviors and data that communicate through generative communications. Each executing computational behavior forms an individual and concurrent flow of control of the overall application.

An application is designed and constructed as a logically non-distributed application, which makes design conceptualization more effective, but at run-time the method decomposes and decouples the application into physically distributed entities, which from the perspective of the Container appear to interact with the rest of the system via their connected Sources.
An important aspect of processing in this method is that it is inherently subject to admission control. That is, a Container does not merely reject an event in order to implement some resource-management policy, such as preventing response times from growing above a threshold. Containers will, by virtue of their pull-based interactions through generative communications, only perform the amount of processing they can handle. Therefore a request is not rejected but is naturally left for the first available Container to process it. This mechanism acts as an implicit overload signal to applications and can be used by the service to adapt behavior. This mechanism removes the need for, and the complexity of, a particular admission control mechanism, which itself depends greatly on the overload management policy and the application itself.

A network of Containers may be constructed either statically (where all stages and the connections between them are known at design-time or run-time) or dynamically (allowing stages to be added and removed at run-time). Static network construction allows the designer (or an automated tool) to reason about the correctness of the flow-graph structure; for example, whether the types of Data generated by one Container are actually handled by execution stages downstream from it.

Static construction may also permit compile-time optimizations, such as short-circuiting an event path between two execution stages, effectively combining two execution stages into one and allowing code from one execution stage to be in-lined into another.

Dynamic network construction affords much more flexibility in application design, permitting new execution stages to be added to the system as needed. For example, if some feature of the service is rarely invoked, the corresponding execution stages may only be instantiated on demand.

Containers effectively mix both static and dynamic construction, wherein at design-time the various stages, and therefore the requisite Behaviors, are designed and constructed in an a-priori fashion. At run-time the Container architecture dynamically deploys these Behaviors to respective networked Containers, based upon the load and resource conditions at that point in time.

Introducing a Container and generative communications between two code modules decouples their execution, providing an explicit control boundary. As well, the execution of a request is not constrained to a given Container, bounding its execution time and resource usage to that consumed within its own execution stage. As a result, the resource consumption of each Container is controlled independently and implicitly, for example by performing admission control on a stage's incoming Data. An untrusted, third-party code module can be isolated within its own stage, limiting the adverse effects of interference with other stages in the system.

This invention provides a mechanism for structuring complex and potentially fragile applications. Rather than exposing a typed function-call API, Containers pull events of certain types and emit events of certain types; there need not be a one-to-one matching of event reception and emission, nor any coupling or synchronization, spatially or temporally.

Containers therefore are composed using a form of decentralized protocol, rather than merely type-matching of function arguments and return values, admitting a flexible range of composition policies. For example, a Container can aggregate Behavior across multiple events over time.

Containers also facilitate debugging and performance analysis of services, which have traditionally been challenges in complex application and server environments.
Monitoring code can be attached to the entry and exit points of each execution stage, allowing the system designer to profile the flow of events through the system and the performance of each stage. It is also straightforward to interpose proxy stages between components for tracing and debugging purposes.

A key goal of enabling ease of software engineering is to shield programmers from the complexity of performance tuning. In order to keep each stage within its ideal operating regime, Containers make use of dynamic resource control, automatically adapting the Behavior of each stage based on observed performance and demand. Abstractly, a Container observes run-time characteristics of the stage and implicitly adjusts allocation and scheduling parameters to meet performance targets.

A wide range of resource control mechanisms are possible in the Container implementation. One example is tuning the number of threads executing within each stage. If all operations within a stage are non-blocking, then a stage would require no more than one thread per Processor to handle load. However, given the possibility of short blocking operations, additional threads may be needed to maintain concurrency. Likewise, allocating additional threads to a stage has the effect of giving that stage higher priority than other stages, in the sense that it has more opportunities to execute.

Another example is adjusting the number of Data entities aggregated per Container within each population passed to a stage's Behavior. A large population size allows for increased locality and greater opportunity to amortize operations across multiple Containers, while a small population size localizes and evenly distributes work across multiple Containers in multiple stages.

Dynamic control in Containers allows the application to adapt to changing conditions despite the particular algorithms used by the underlying operating system. In some sense, Containers are naive about the resource-management policies of the OS. For example, the Container thread-pool sizing controller is not aware of the OS thread-scheduling policy; rather, it influences thread allocation based on external observations of application performance.

Another form of implicit resource management in Containers is overload control. Here, the goal is to prevent the service from exhibiting significantly degraded performance under heavy load due to overcommitting resources. As a service approaches saturation, the response times exhibited by requests can grow exponentially. To address this problem it is often desirable to shed load, for example by sending explicit rejection messages to users, rather than causing all users to experience unacceptable response times.

Overload protection in Containers can be accomplished through the use of fine-grained and inherent admission control at each stage, as a result of generative communication primitives which can be used to simulate a wide range of policies. Generally, by having inherent admission control, the system can limit the rate at which a stage accepts new Data entities, allowing performance bottlenecks to be isolated. Containers allow the admission control policy to be tailored dynamically for each individual stage, and admission control can be disabled for any stage.
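As an illustration only, per-stage admission control of the kind described above can be sketched as a bound on the population a stage's Source will accept; the capacity value and the names below are assumptions for the example.

from collections import deque

class BoundedStageSource:
    """Toy per-stage Source whose 'write' is rejected once the stage is saturated."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity          # admission-control threshold for this stage
        self.items: deque = deque()

    def write(self, datum) -> bool:
        if len(self.items) >= self.capacity:
            return False                  # explicit overload signal to the upstream stage
        self.items.append(datum)
        return True

    def take(self):
        return self.items.popleft() if self.items else None

if __name__ == "__main__":
    stage_input = BoundedStageSource(capacity=2)
    for request in ["r1", "r2", "r3"]:
        accepted = stage_input.write(request)
        print(request, "accepted" if accepted else "rejected (stage overloaded)")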
A fundamental property of Container composition design is that stages are prepared to deal with Data rejection. Rejection of events from a Source indicates that the corresponding stage is overloaded, and the Container uses this information to implicitly adapt. This explicit indication of overload differs from traditional service designs that treat overload as an exceptional case for which applications are given little indication or control. In this invention, overload management is a primary characteristic of the run-time dynamics.

Rejection of a Data entity from a Container does not imply that the user's request is rejected from the system. Rather, it is the responsibility of the stage receiving a Data rejection to perform some alternate action. This action depends greatly on the Behavior logic. For example, if a static Web page request is rejected, it is usually sufficient to send an error message to the client indicating that the service is overloaded. However, if the request is for a complex operation such as executing a stock trade, it may be necessary to respond in other ways, such as by transparently re-trying the request at a later time. More generally, Data rejection can be used as a signal to degrade service, by performing variants of a service that require fewer resources.
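The alternate actions described in this passage might, purely as an illustration, be expressed as a small rejection-handling policy; the request types, the retry mechanism and the degraded-service fallback shown are assumptions for the example.

from typing import List

def on_rejection(data_entity: dict, retry_queue: List[dict]) -> str:
    """Choose an alternate action for a Data entity whose downstream stage rejected it."""
    kind = data_entity.get("type")
    if kind == "static-page-request":
        return "send 'service overloaded' error page to the client"
    if kind == "stock-trade-request":
        retry_queue.append(data_entity)   # transparently re-try the request later
        return "queued for transparent retry"
    return "serve a degraded variant requiring fewer resources"

if __name__ == "__main__":
    retries: List[dict] = []
    print(on_rejection({"type": "static-page-request"}, retries))
    print(on_rejection({"type": "stock-trade-request", "order": "buy 100 units"}, retries))
    print(on_rejection({"type": "dynamic-page-request"}, retries))
    print("pending retries:", len(retries))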

The following scenarios represent some embodiments of the invention.

1. Distributed / Networked Processors: The embodiment of Figure 7 represents a distributed network of Processors which implement a simple or complex Data-Behavior-Container-Source based application. The Processors may include: single computer chip CPUs, embedded software core processors, Hyper-Threading, homogeneous and heterogeneous multi-core processors, mobile devices and embedded devices. In Figure 7, C1 and C2 represent Processors which are physically connected through a LAN or WAN or both. E11, E12, E21, E22 and E23 represent executing Containers as per the method and system described. This embodiment is an example and is not limited to this number or type of Processors or executing Containers. The Source lines connecting executing Containers represent the interaction and communication of Data entities at run-time between executing Containers and Behaviors.

This architecture has at least the following applications:

a. The implementation of Containers allows for reactive adaptation and self-reconfiguration to optimize performance and reduce power consumption and heat dissipation, determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers.

b. This architecture dynamically configures inter-processor communications systems and implements complex topologies such as a mesh network. Furthermore, this architecture dynamically produces a communication topology "best fit for the current operating conditions at the time".

c. The dynamically reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the system and allows for reconfiguration for improved performance or for the minimization of resources such as power and heat production.

d. The dynamically reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allow for run-time self-reconfiguration of the system.

e. A self-reconfigurable architecture allows for design-time integration and run-time reconfiguration with a high-level "Integrated Development Environment".

2. Homogeneous / Heterogeneous Multi-Core Processors: The embodiment of Figure 8 represents a homogeneous / heterogeneous multi-core Processor which implements a simple or complex Data-Behavior-Container-Source based application. In Figure 8, P1, P2, P3 and P4 represent Processors which may be either homogeneous or heterogeneous in type. E11, E21, E22, E23, E31, E32, E41 and E42 represent executing Containers as per the method and system described. This embodiment is an example and is not limited to this number or type of Processors or executing Containers. The Source lines connecting executing Containers represent the interaction and communication of Data entities at run-time between executing Containers and Behaviors.
This architecture has at least the following applications:

a. The implementation of Containers allows for reactive adaptation and self-reconfiguration to optimize performance and reduce power consumption and heat dissipation, determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers.

b. This architecture dynamically configures inter-processor communications systems and implements complex topologies such as a mesh network. Furthermore, this architecture dynamically produces a communication topology "best fit for the current operating conditions at the time".

c. The dynamically reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the system or allows for reconfiguration for improved performance or for the minimization of resources such as power and heat production.

d. The dynamically reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allow for run-time self-reconfiguration of the system.

e. A self-reconfigurable architecture allows for design-time integration and run-time reconfiguration with a high-level "Integrated Development Environment".

3. Heterogeneous Multi-Core Processors: The embodiment of Figure 9 represents a heterogeneous multi-core Processor which implements a simple or complex Data-Behavior-Container-Source based application. In Figure 9, P1, P2, P3, P4 and P5 represent Processors which are heterogeneous in type. P5 may act as Master in a Master-Slave interaction pattern, with P1-P4 acting as Slave Processors. Alternatively, Processors P1-P4 may act as Peers in a Peer-to-Peer interaction pattern with P5 acting as a rendezvous point for the interaction. Alternatively, Processors P1-P4 may act as Producers in a Producer-Consumer interaction pattern with P5 acting as a Consumer. Alternatively, Processors P1-P4 may act as Consumers in a Producer-Consumer interaction pattern with P5 acting as a Producer. E11, E21, E22, E23, E31, E32, E41, E42, E51 and E52 represent executing Containers as per the method and system described. This embodiment is an example and is not limited to this number or type of Processors or executing Containers. The Source lines connecting executing Containers represent examples of the interaction and communication of Data entities at run-time between executing Containers and Behaviors.

This architecture has at least the following applications:

a. The implementation of Containers allows for reactive adaptation and self-reconfiguration to optimize performance and reduce power consumption and heat dissipation, determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers.

b. This architecture dynamically configures inter-processor communications systems and implements complex topologies such as a mesh network. Furthermore, this architecture dynamically produces a communication topology "best fit for the current operating conditions at the time".
c. The dynamically reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the system or allows for reconfiguration for improved performance or for the minimization of resources such as power and heat production.

d. The dynamically reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allow for run-time self-reconfiguration of the system.

e. A self-reconfigurable architecture allows for design-time integration and run-time reconfiguration with a high-level "Integrated Development Environment".

The previous three architectures have at least the following applications:

1. The implementation of Containers allows for reactive adaptation and self-reconfiguration to optimize performance and reduce power consumption and heat dissipation, determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers.

2. This architecture dynamically configures inter-processor communications systems and implements complex topologies such as a mesh network; this architecture dynamically produces a communication topology "best fit for the current operating conditions at the time".

3. The dynamically reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the system or allows for reconfiguration for improved performance or for the minimization of resources such as power and heat production.

4. The dynamically reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allows for run-time self-reconfiguration of the system.

5. A self-reconfigurable architecture allows for design-time integration and run-time reconfiguration with a high-level "Integrated Development Environment".

4. Microprocessor Architecture: A general-purpose software/hardware microprocessor architecture incorporates at least three distinct functions (modules), as follows:

1) Fetch: fetching the next instruction;
2) Decode: decoding the fetched instruction; and
3) Execute: executing the decoded instruction.

These functions are implemented as specific modules within the architecture. Modules within the architecture can be considered to be the actual configuration of the microprocessor at the gate level, which implements the desired method and system of the invention. These modules communicate through single or multiply configured, distinct and static system buses.

In addition, the number of Fetch, Decode and Execute modules is also fixed and their location within the microprocessor is static. As well, the particular module behavior or function is static and cannot be changed during operation.

This embodiment includes a microprocessor architecture which is implemented using Containers. A Container-based architecture decomposes an application at run-time into a cooperative collection of distributed and networked computational behaviors and data that communicate through generative communications. In this case the concurrent system bus communication is facilitated through Source entities, which are implemented through instructions added to the underlying instruction set implemented by the microprocessor. This Source-entity-based bus structure can take any network topological form, including for example a mesh network.

With reference to Figure 10, Data (instructions), exceptions and clock I/O are written to the system bus upon input, following which embedded Container(s) with requisite Behavior decode the input data and dynamically configure the necessary Behavior in order to process the data.

This architecture has at least the following applications:

1. The hardware implementation of Containers allows for reactive adaptation and self-reconfiguration to optimize performance and reduce power consumption and heat dissipation, determined by: 1) current load and 2) resource conditions. This mechanism is managed by the Containers.

2. The "dynamic pipeline" of the system bus generated by the interaction of Containers and the resultant output realizes multiple independent and dynamically time-varying system buses. Whereas current architectures statically configure these inter-processor communications systems and implement complex topologies such as superscalar and mesh network buses, this architecture dynamically produces a communication topology "best fit for the current operating conditions at the time".

3. The dynamically reconfigurable nature of Containers and Behaviors allows for system modules, such as the mentioned Fetch, Decode and Execute modules for example, to be changed during operation. That is, a different Behavior for the Decode module, for instance, may produce the ability to process different instruction sets simultaneously without a-priori design. An example is the provision of a dynamic "instruction level virtualization" environment allowing disparate languages, based upon for example RISC or CISC, to execute concurrently and to benefit from the load-sharing and resource-management capabilities inherent in the underlying Container architecture upon which this architecture is implemented.

4. The dynamically reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the microprocessor or allows for reconfiguration for improved performance or for the minimization of resources such as power and heat production.

5. The dynamically reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allows for run-time self-reconfiguration of the microprocessor.

3. The dynamic reconfigurable nature of Containers and Behaviors allows for system modules, such as the mentioned Fetch, Decode and Execute modules for example, to be changed during operation. That is, a different Behavior for the Decode module, for instance, may produce the ability to process different instruction sets simultaneously without a-priori design. An example is the provision of a dynamic "instruction level virtualization" environment in which disparate languages (instruction sets) based upon, for example, RISC or CISC execute concurrently and benefit from the load sharing and resource management capabilities inherent in the underlying Container architecture upon which this architecture is implemented.

4. The dynamic reconfigurable nature of Containers allows for the respective modules and their interactions to change during run-time, dependent upon current conditions and respective requirements. For instance, this architecture may allow for altogether new modules to be added during operation, which allows for reconfiguration of the overall behavior of the microprocessor, or allows for reconfiguration for improved performance or for minimization of resources such as power and heat production.

5. The dynamic reconfigurable nature of Containers and Behaviors and the reactive coordination foundation of this technology allows for run-time self-reconfiguration of the microprocessor.

6. A self-reconfigurable architecture allows for design-time integration and run-time reconfiguration with a high-level "Integrated Development Environment", through the use of system call-backs and hooks implemented in the Container microprocessor net-list.
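The sketch referred to above is given here. It is a minimal software analogue of the Fig 10 flow and of application 3, not the hardware net-list implementation: the names SourceEntity, Container, Behavior and DataEntity, the use of a BlockingQueue as a stand-in for shared associative memory, and the "RISC"/"CISC" tags are all assumptions made for illustration. Data entities written to the Source-entity bus are reactively taken by a Container, mapped to a Decode Behavior selected at run-time, and executed; installing a second Decode Behavior while the Container is running gives a crude picture of instruction level virtualization.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative software analogue only; identifiers are assumptions, not a published API.
public class DynamicContainerBus {

    // A Data entity on the "bus": an instruction word tagged with its instruction set.
    record DataEntity(String instructionSet, String instruction) {}

    // A Behavior entity: a unit of computation that can be installed or replaced at run-time.
    interface Behavior { void process(DataEntity d); }

    // Source entity: stand-in for shared associative memory with write/take semantics.
    static class SourceEntity {
        private final BlockingQueue<DataEntity> space = new LinkedBlockingQueue<>();
        void write(DataEntity d) { space.add(d); }
        DataEntity take() throws InterruptedException { return space.take(); }
    }

    // Container entity: reactively takes data, maps it to a Behavior, and executes it.
    static class Container implements Runnable {
        private final SourceEntity source;
        private final ConcurrentMap<String, Behavior> behaviors = new ConcurrentHashMap<>();
        Container(SourceEntity source) { this.source = source; }

        // Behaviors may be added or swapped while the Container is running.
        void reconfigure(String instructionSet, Behavior decode) {
            behaviors.put(instructionSet, decode);
        }

        @Override public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    DataEntity d = source.take();                    // data retrieval from the bus
                    Behavior b = behaviors.get(d.instructionSet());  // data-behavior mapping
                    if (b != null) b.process(d);                     // execution
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        SourceEntity bus = new SourceEntity();
        Container container = new Container(bus);
        // Two Decode Behaviors installed at run-time for two hypothetical encodings.
        container.reconfigure("RISC", d -> System.out.println("RISC decode: " + d.instruction()));
        container.reconfigure("CISC", d -> System.out.println("CISC decode: " + d.instruction()));

        Thread worker = new Thread(container);
        worker.start();
        bus.write(new DataEntity("RISC", "ADD r1, r2, r3"));
        bus.write(new DataEntity("CISC", "ADD AX, [BX+4]"));
        Thread.sleep(200); // let the Container drain the bus before shutting down
        worker.interrupt();
        worker.join();
    }
}
```

A BlockingQueue only approximates take semantics over a shared associative memory; a fuller sketch would also support template-matching read operations of the kind recited in claim 7.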

Claims (20)

1. A method for the ongoing dynamic assignment and scheduling of all supported units of execution including Behavior entities and/or Data entities between Container entities at run-time, said Container entities being associated with one or more Processors, where the assignment and scheduling algorithms are not a-priori defined, but are derived from current runtime operating conditions.
2. The method of claim 1 wherein the Container entities contain at least one Behavior entity and/or at least one Data entity; and wherein a Container entity is able to retrieve, create, execute and store Behavior entities and Data entities.
3. The method of claim 2 including the step of providing a probability distribution of Data entity types; wherein the probability distribution of Data entity types is variable with time, not a-priori defined, and is derived from current runtime operating conditions.
4. The method of claim 2 including the step of providing a probability distribution of Behavior entity types; wherein the probability distribution of Behavior entity types is variable with time, not a-priori defined, and is derived from current runtime operating conditions.
5. The method of claim 1 wherein a Container entity instantiates one or more additional Container entities with each Processor, the instantiation being derived from current runtime operating conditions; and wherein Container entities contain at least one Behavior entity and at least one related Data entity, whereby for every Data entity there exists a Behavior entity.
6. The method of claim 1 including facilitating non a-priori defined communication, derived from current runtime operating conditions, of: Behavior entities to Behavior entities; Container entities to Container entities; and Behavior entities to Container entities.
7. The method of claim 6 including facilitating internal communication via a Source entity; wherein the Source entity comprises shared associative memory; and wherein communication that is external employs TCP/IP protocols and includes read, write and take semantics.
8. The method of claim 1 including constructing applications from combinations of Data entities and Behavior entities, where granularity of computational work unit size is not a-priori defined but is dynamic and derived from current operating conditions; wherein the step of assigning and scheduling supported units of computation is not a-priori defined and is determined from load and resource conditions.
9. The method of claim 8 wherein applications are constructed from distributed and networked Behavior entities, where granularity of computational work unit size is not a-priori defined but is dynamic and derived from current operating conditions; wherein combinations of Behavior entities and Data entities are associated in the short term and long term.
10. The method of claim 2 including associating a Behavior entity with a logical stage of execution.
11. The method of claim 1 wherein a Container entity may perform Data Retrieval, Data Behavior Mapping, Behavior Retrieval, Execution, Finalization and Aggregation.
12. The method of claim 11 wherein Data Retrieval includes reactively retrieving Data entities at run-time using generative communications; wherein Data Behavior Mapping includes reactively mapping Data entities and Behavior entities at run-time; and wherein Behavior Retrieval includes reactively retrieving Behavior entities at run-time using generative communications.
13. A system comprising at least one Processor and/or one or more Container entities associated with one or more of the at least one Processor, the system dynamically assigning and scheduling all supported units of execution between Container entities at run-time, where the assignment and scheduling algorithms are not a-priori defined, but are derived from current runtime operating conditions.
14. The system of claim 13 comprising Behavior entities executable by the Container entities; and Data entities that are input to or output from Behavior entities; wherein the Container entities retrieve, create, execute and store Behavior entities and Data entities.
15. The system of claim 13 comprising Source entities that facilitate communication of: Behavior entities to Behavior entities; Container entities to Container entities; and Behavior entities to Container entities; wherein the Source entities share associative memory for storage and retrieval of Data entities and Behavior entities.
16. The system of claim 13 including Behavior entities and Data entities wherein the Behavior entities and the Data entities are addressable, reconfigurable and temporally and spatially uncoupled.
17. The system of claim 15 wherein the Source entities are addressable and reconfigurable.
18. The system of claim 13 wherein the at least one Processor is selected from: distributed processors; networked processors; homogeneous multi-core processors; heterogeneous multi-core processors; or multi-thread processors.
19. The system of claim 13 wherein the Container entities comprise monitoring code for debugging and/or performance analysis and employ dynamic resource control.
20. A method of dynamically composing a software application from one or more Container entities containing one or more Behavior entities and/or one or more Data entities at run-time, the method including the step of dynamically assigning and scheduling all supported units of execution including Behavior entities and/or Data entities between multiple processors, where the assignment and scheduling algorithms are not a-priori defined, but are derived from current runtime operating conditions.
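Claims 11 and 12 above enumerate the stages a Container entity may perform: Data Retrieval, Data Behavior Mapping, Behavior Retrieval, Execution, Finalization and Aggregation. The sketch below walks units of work through those stages in software; it is a non-authoritative illustration under assumed names (ContainerLifecycle and its methods), and it replaces the claimed generative communications with a plain in-memory list for brevity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Non-authoritative sketch of the lifecycle named in claims 11 and 12.
// All identifiers are illustrative assumptions, not a published API.
public class ContainerLifecycle {

    // A local list stands in for the shared associative memory of a Source entity.
    private final List<String> dataSpace = new ArrayList<>(List.of("4", "7", "11"));
    private final List<Integer> results = new ArrayList<>();

    // Data Retrieval: reactively take the next Data entity from the space.
    private String dataRetrieval() {
        return dataSpace.isEmpty() ? null : dataSpace.remove(0);
    }

    // Data Behavior Mapping: decide at run-time which Behavior this datum needs.
    private String dataBehaviorMapping(String datum) {
        return Integer.parseInt(datum) % 2 == 0 ? "halve" : "triple";
    }

    // Behavior Retrieval: fetch the mapped Behavior entity (here, a function).
    private Function<Integer, Integer> behaviorRetrieval(String behaviorName) {
        return behaviorName.equals("halve") ? n -> n / 2 : n -> n * 3;
    }

    // Execution and Finalization for one unit of work.
    private void runOnce() {
        String datum = dataRetrieval();
        if (datum == null) return;
        Function<Integer, Integer> behavior = behaviorRetrieval(dataBehaviorMapping(datum));
        int output = behavior.apply(Integer.parseInt(datum)); // Execution
        results.add(output);                                  // Finalization: persist the result
    }

    public static void main(String[] args) {
        ContainerLifecycle container = new ContainerLifecycle();
        for (int i = 0; i < 3; i++) container.runOnce();
        // Aggregation: combine the finalized results into a single answer.
        int sum = container.results.stream().mapToInt(Integer::intValue).sum();
        System.out.println("aggregate = " + sum); // (4/2) + (7*3) + (11*3) = 56
    }
}
```

In a fuller treatment, dataRetrieval and behaviorRetrieval would use the read, write and take semantics of a Source entity as in claim 7 rather than a local list.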
AU2008281324A 2007-07-30 2008-07-30 A method and system for reactively assigning computational threads of control between processors Ceased AU2008281324B2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
AU2007904048A AU2007904048A0 (en) 2007-07-30 System and Method for Delivering Highly-Concurrent On-Line Internet Services
AU2007904048 2007-07-30
AU2007101028 2007-10-22
AU2007101028 2007-10-22
AU2008281324A AU2008281324B2 (en) 2007-07-30 2008-07-30 A method and system for reactively assigning computational threads of control between processors
PCT/AU2008/001104 WO2009015432A1 (en) 2007-07-30 2008-07-30 A method and system for reactively assigning computational threads of control between processors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2008281324A AU2008281324B2 (en) 2007-07-30 2008-07-30 A method and system for reactively assigning computational threads of control between processors

Publications (2)

Publication Number Publication Date
AU2008281324A1 AU2008281324A1 (en) 2009-02-05
AU2008281324B2 true AU2008281324B2 (en) 2010-06-10

Family

ID=40303809

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2008281324A Ceased AU2008281324B2 (en) 2007-07-30 2008-07-30 A method and system for reactively assigning computational threads of control between processors

Country Status (3)

Country Link
AU (1) AU2008281324B2 (en)
GB (1) GB2470973B (en)
WO (1) WO2009015432A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM438603U (en) * 2012-05-24 2012-10-01 Justing Tech Taiwan Pte Ltd Improved lamp casing structure

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6922685B2 (en) * 2000-05-22 2005-07-26 Mci, Inc. Method and system for managing partitioned data resources
US7526515B2 (en) * 2004-01-21 2009-04-28 International Business Machines Corporation Method and system for a grid-enabled virtual machine with movable objects
US20060294401A1 (en) * 2005-06-24 2006-12-28 Dell Products L.P. Power management of multiple processors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
T. A. MARSLAND ET AL: A Study of Software Multithreading in Distributed Systems. Technical Report TR 95-23 [online], 20 November 1995 *
T. HOLVOET ET AL: Towards Generative Software Composition. In proc. Of the Thirty-first Annual Hawaii International Conference on System Sciences [online]. 1998 *

Also Published As

Publication number Publication date
AU2008281324A1 (en) 2009-02-05
GB201000157D0 (en) 2010-02-24
GB2470973A (en) 2010-12-15
WO2009015432A1 (en) 2009-02-05
GB2470973B (en) 2012-10-10


Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired