WO2006042841A2 - Action sur un système d'intérêt (Acting on a system of interest) - Google Patents

Action sur un système d'intérêt (Acting on a system of interest)

Info

Publication number
WO2006042841A2
Authority
WO
WIPO (PCT)
Prior art keywords
model
actor
meta
event
actions
Prior art date
Application number
PCT/EP2005/055310
Other languages
English (en)
Other versions
WO2006042841A8 (fr)
Inventor
Peter Hawkins
Original Assignee
Manthatron-Ip Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Manthatron-Ip Limited filed Critical Manthatron-Ip Limited
Priority to JP2007536195A (JP5128949B2)
Priority to GB0613704A (GB2425868B)
Priority to CA2583921A (CA2583921C)
Priority to AU2005296859A (AU2005296859B2)
Priority to CN2005800376254A (CN101288090B)
Priority to EP05806079A (EP1805704A2)
Publication of WO2006042841A2
Publication of WO2006042841A8

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/20Software design
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00Systems involving the use of models or simulators of said systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00Systems involving the use of models or simulators of said systems
    • G05B17/02Systems involving the use of models or simulators of said systems electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/20Software design
    • G06F8/24Object-oriented
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3851Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/08Computing arrangements based on specific mathematical models using chaos models or non-linear system models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • the invention relates generally to acting on a subject system using a model.
  • the invention relates in its various aspects to an actor for effecting action in a subject system, to a logic-based computing device including an actor for effecting action in a subject system, to a compiler, to methods of effecting action in a subject system, to a method of controlling a compiler, and to a method of modelling a subject system.
  • a system is defined as "... a set of interrelated elements", the state of a system is defined as “... the set of relevant properties which that system has at [a moment of] time” and the environment of a system is defined as "... a set of elements and their relevant properties, which elements are not part of the system but a change in any of which can produce a change in the state of the system”.
  • an event is defined as “... a change in the ... state of the system (or environment)” and the paper describes the dependency of systems changes on events in the system or its environment, through “reactions”, “responses” and “autonomous acts”.
  • One way in which a modern system could be illustrated is given in Figure 1.
  • a product 1-2 such as a general purpose computer system or an embedded computer system.
  • the product 1-2, which is also a system within the wider system of Figure 1, may be employed as an actor in an enterprise 1-1 (which may also be viewed as a kind of system) in the provision of a service 1-3 such as online technical support.
  • the service 1-3 constitutes another system.
  • Products 1-2 and/or services 1-3 can be consumed by other enterprises (not shown) or by consumers 1-4, e.g. a person 1-5, a group 1-6 or a household 1-7.
  • the consumer 1-4 may also be modelled as a system.
  • Enterprises 1-1 and consumers 1-4 are socio-technical systems, and services 1-3 are usually delivered through a combination of technology and human resources. This means that in analysing, designing, constructing, testing, implementing and operating modern complex, adaptive systems, it is desirable to address more than just the technical requirements of computers and other machinery. For maximum effect, it is important to understand how industrial and other processes interact with applications and how people are organised to execute the processes of these applications.
  • Figure 2 represents a conventional Von Neumann computer. It contains a Central Processing Unit (CPU) 2-3, which contains a Control Unit 2-4 and an Arithmetic & Logic Unit (ALU) 2-5.
  • the computer also contains a Memory 2-6, and an Input/Output (I/O) Controller 2-2.
  • the CPU 2-3, the Memory 2-6 and the I/O Controller 2-2 communicate via an internal Bus 2-1.
  • the fetch, decode, execute cycle of such a computer is operated under the control of a Program Counter 2-7, included in the CPU 2-3.
  • the Program Counter 2-7 increments after each instruction, so the next action obtained is the next instruction in sequence.
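The sequential fetch/decode/execute cycle described above can be sketched as follows. This is a hedged illustration in Python; the tiny instruction set and the names are invented for the example, not taken from the patent:

```python
# Minimal sketch of the Von Neumann fetch/decode/execute cycle driven by a
# Program Counter. The instruction set here is an invented toy.

def run(memory, registers):
    pc = 0  # the Program Counter: forces strictly sequential execution
    while pc < len(memory):
        opcode, *operands = memory[pc]   # fetch + decode
        pc += 1                          # increment: next action is next in sequence
        if opcode == "LOAD":             # execute
            reg, value = operands
            registers[reg] = value
        elif opcode == "ADD":
            dst, src = operands
            registers[dst] += registers[src]
        elif opcode == "JMP":            # only an explicit jump breaks the sequence
            pc = operands[0]
        elif opcode == "HALT":
            break
    return registers

program = [("LOAD", "a", 2), ("LOAD", "b", 3), ("ADD", "a", "b"), ("HALT",)]
print(run(program, {}))  # {'a': 5, 'b': 3}
```

The point of the sketch is the single `pc` variable: every action is obtained by incrementing it, which is the arbitrarily sequential paradigm the later aspects of the invention seek to avoid.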
  • Communicating Sequential Processes was devised by Professor C.A.R. Hoare ("Communicating Sequential Processes", Communications of the ACM, vol. 21, pages 666-677, 1978), building on the work of Dijkstra (DIJKSTRA, E. W. (1975). "Guarded Commands, Nondeterminacy and Formal Derivation of Programs", Communications of the ACM, vol. 18, pages 453-457).
  • Communicating Sequential Processes introduced parallel sequential processes capable of communicating via synchronised input and output commands. Initial work in this area was targeted at new programming languages, but has since been taken up in hardware designs, for example in Inmos's Transputer.
  • FIG. 3 illustrates an early Dataflow processor designed by Dennis and Misunas at MIT in 1975.
  • the MIT Dataflow Machine includes a set of Processing Elements 3-1, which are interconnected through a Communication Network 3-2.
  • an Activity Store 3-5 holds activity templates.
  • An Instruction Queue 3-4 holds the addresses of fired instructions (i.e. activity templates for which all inputs are available).
  • the first entry in the Instruction Queue 3-4 is removed by a Fetch Unit 3-9, which uses the entry to fetch the corresponding opcode, data, and destination list which constitute the activity template held in the Activity Store 3-5. This is then packed into an operation token, which is forwarded by the Fetch Unit 3-9 to an available Operation Unit 3-3.
  • the Operation Unit 3-3 executes the operation specified by the opcode using the corresponding operands, generates result tokens for each destination, and provides them to a Send Unit 3-8, which decides whether the destination of the token is in a local Processing Element 3-1, or is in a remote one. If the destination is determined to be local, the token is sent to a local Receive Unit 3-7 that, in turn, passes it to an Update Unit 3-6. Otherwise, the token is routed to the destination Processing Element 3-1 through the Communication Network 3-2. Instructions are processed in a pipeline fashion since all units operate concurrently.
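The firing behaviour described above can be approximated in a few lines. This is a toy single-process sketch; the template contents are invented and no claim is made about the actual MIT design details:

```python
from collections import deque

# Toy sketch of dataflow firing: an activity template fires once all of its
# operand slots are filled; its result token then fills the slots of the
# destination templates listed against it.

templates = {
    "t1": {"op": "add", "operands": [2, 3], "dests": [("t3", 0)]},
    "t2": {"op": "mul", "operands": [4, 5], "dests": [("t3", 1)]},
    "t3": {"op": "add", "operands": [None, None], "dests": []},
}

ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def ready(t):
    return all(v is not None for v in t["operands"])

queue = deque(tid for tid, t in templates.items() if ready(t))  # fired instructions
results = {}
while queue:
    tid = queue.popleft()                      # Fetch Unit removes the first entry
    t = templates[tid]
    result = ops[t["op"]](*t["operands"])      # Operation Unit executes the opcode
    results[tid] = result
    for dest_id, slot in t["dests"]:           # Send/Update Units route result tokens
        dest = templates[dest_id]
        dest["operands"][slot] = result
        if ready(dest):                        # destination fires once all inputs arrive
            queue.append(dest_id)

print(results["t3"])  # 25
```

Unlike the Program Counter sketch earlier, nothing here imposes an instruction order beyond data availability, which is why the real machine can process instructions in a pipeline with all units operating concurrently.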
  • Behaviours are defined as collections of, possibly conditional, "commands” which will cause the actor to create other actors and/or send further communications, as well as potentially modifying its own behaviour in response to the next communication it may receive.
  • the only type of event which actor systems recognise is the creation of a new "task” (i.e. "communication”).
  • all activity within an actor system is driven by the propagation of communications between actors. This is both a strength and a weakness of actor systems.
  • the Actor model is powerful enough to implement any system which can be defined within the Communicating Sequential Process or Dataflow models described above. However, it limits the degree of granularity of concurrency to individual behaviours, each of which may include multiple, conditional commands. It can also be said that the Actor model widens the semantic gap between modern systemic applications and the underlying computational environment, such as requiring convoluted mechanisms for handling non-communication related events, thereby limiting its real-world application.
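The communication-driven character of actor systems noted above can be illustrated with a minimal, single-threaded sketch. All names are invented, and real actor systems are concurrent rather than queue-driven like this:

```python
from collections import deque

# Minimal sketch of Hewitt-style actors: all activity is driven by the
# propagation of communications; a behaviour may send messages, create
# actors, and replace itself ("become") for the next message.

class System:
    def __init__(self):
        self.actors, self.mailbox = {}, deque()

    def create(self, name, behaviour):
        self.actors[name] = behaviour
        return name

    def send(self, name, msg):
        self.mailbox.append((name, msg))   # the only kind of event: a communication

    def run(self):
        while self.mailbox:
            name, msg = self.mailbox.popleft()
            new_behaviour = self.actors[name](self, msg)
            if new_behaviour is not None:  # actor modifies its own behaviour
                self.actors[name] = new_behaviour

def counter(count):
    def behaviour(system, msg):
        if msg == "inc":
            return counter(count + 1)      # replacement behaviour for next message
        if msg[0] == "read":
            system.send(msg[1], count)     # reply by sending a communication
    return behaviour

def printer(system, msg):
    print("count =", msg)

sys_ = System()
sys_.create("c", counter(0))
sys_.create("p", printer)
for _ in range(3):
    sys_.send("c", "inc")
sys_.send("c", ("read", "p"))
sys_.run()  # prints: count = 3
```

Note the weakness the text identifies: the only event this system can ever react to is the arrival of a communication, so any non-communication event (a timer, a state change elsewhere) must be contorted into a message.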
  • Figure 5 shows how a modern layered computer operating system might be designed, similar to that described in TANENBAUM, ANDREW S. "Modern Operating Systems” Prentice Hall, 2001.
  • a computer operating system manages the underlying computer hardware resources (e.g. memory, disks, printers, etc.) and provides a "virtual machine” more suitable to the needs of the users and programmers of the computer system concerned. It comprises seven virtual layers, 5-11 to 5-17 running on a CPU 5-20.
  • the first layer 5-11 hides the underlying hardware, providing a low-level virtual machine on top of which a common operating system can be designed to support several different hardware configurations.
  • the second layer 5-12 provides the basic building blocks of the operating system, including interrupt handling, context switching and memory management.
  • the third layer 5-13 contains the components essential to managing processes, and, more specifically, threads, within the operating system, which is essential to providing a multiprocessing environment on essentially a single, sequential CPU.
  • the fourth layer 5-14 provides the drivers which handle all activity involving each of the specific peripherals or resources which are or which may be connected to the CPU. Above this is the virtual memory management layer 5-15, which enables the computer to offer memory spaces to its users which are apparently significantly in excess of the physical memory available.
  • the sixth layer 5-16 provides the features necessary to support the management of files held on disks and other long term storage media.
  • the seventh and top layer 5-17 handles system calls and thereby provides the interface through which user programs 5-18 can make calls on the system resources.
  • meta-data In recent years, there has been a considerable increase in the awareness of the importance of understanding meta-data, which can be thought of as "data about data”. This increased awareness seems to have come from two main sources, namely: the need to realise the value of data warehousing and business intelligence solutions; and the need to reduce the effort associated with developing, maintaining and exchanging information between websites. Consequently, there has been growing interest in the software communities in technologies such as XML and the Meta-Object Framework (MOF). The value of meta-data in the realm of data interchange is well understood. However, in most cases, the meta-data is not available to users or their applications, being either implicit in the application, or held in analysis and design phase models which have not been translated through to the ultimate application. This inhibits the flexibility of most such applications to adapt as the context in which they are being used changes.
  • MOPs (Metaobject Protocols)
  • CLOS (Common LISP Object System)
  • MOPs involve base level application programming in the context of meta-level programs which describe the objects in the programming language or system itself.
  • MOPs are only available to specialist programmers during development and are not available for modification at run time - i.e. post compilation or interpretation. They also appear to add considerably to the complexity of execution, making it difficult for them to support concurrent execution in particular.
  • Nye US2003/0195867
  • An agent 6-0 takes performance standards and inputs from an environment, and acts on the environment.
  • a performance element 6-1 takes percepts as inputs from sensors 6-2 and decides on actions, which it then outputs to effectors 6-3.
  • a learning element 6-4 takes some knowledge about the performance element 6-1 together with input from a critic 6-5 concerning how the agent is performing, and determines what changes to send to the performance element 6-1 with a view to improving performance for the future.
  • a problem generator 6-6 suggests actions to the performance element 6-1 that will lead to new and informative experiences, based on learning goals provided from the learning element 6-4.
  • the performance element 6-1 often comprises knowledge in the form of "IF <condition> THEN <inference>" rules, known as "production rules".
  • Such systems use these production rules together with "forward-chaining" search algorithms to find inferences from which actions can be initiated and from which, sometimes, new "knowledge" can be generated.
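A minimal forward-chaining sketch over such production rules might look as follows. The rules and facts are invented examples:

```python
# Sketch of forward chaining over IF <condition> THEN <inference> production
# rules: repeatedly scan the rule set, firing any rule whose conditions are
# all satisfied, until no new inference is produced.

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "migrates"),
    ({"migrates"}, "leaves_in_winter"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # re-scan until nothing new fires
        changed = False
        for conditions, inference in rules:
            if conditions <= facts and inference not in facts:
                facts.add(inference)    # fire the rule, generating new "knowledge"
                changed = True
    return facts

print(sorted(forward_chain({"has_feathers", "can_fly"})))
# ['can_fly', 'has_feathers', 'is_bird', 'leaves_in_winter', 'migrates']
```

The repeated full scan of the rule set on every iteration also illustrates, in miniature, the resource consumption that the next paragraph blames for the poor performance of production-rule systems.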
  • Although Expert Systems have had some considerable technical effect and commercial success, as did the R1 system described above, production rules were found to have poor performance, due to the resources consumed by forward-chaining search algorithms, and limited application. Consequently, designers often had to resort to other methods to capture knowledge, such as frame systems and neural networks. As a result, several different knowledge representation mechanisms, which are difficult to integrate, can be present in modern-day knowledge-based systems.
  • Computer software has been described and/or designed using several graphic techniques from conventional flow charts, such as those shown in Figure 7, to object-oriented techniques, such as those described in the Unified Modelling Language (UML) - and associated languages, such as the Object Constraint Language (OCL).
  • UML is illustrated by the UML class diagram and the UML activity diagrams of Figures 8A and 8B respectively.
  • Mathematical set theory has also been employed, through languages such as "Z”.
  • VHDL (VHSIC Hardware Description Language)
  • Simulators can be considered to be from one of two classes. These are: analytic simulators, which are used to understand the dynamics of a whole system, usually while it is being designed, constructed or modified; and digital virtual environments, which are used to enable humans to interact with complex virtual systems, typically in a gaming or training situation.
  • DEVS models system dynamics through sets of equations representing: a set of input values, each associated with an external event; a set of states; a set of output values; an internal state transition function; an external state transition function; an output function; and a resting time function.
  • the only events recognised by such simulations are the passage of a pre-determined period (i.e. the resting time) constituting an internal event, or the arrival of an input, constituting an external event.
  • these internal and external events cause activity within such a simulation, but only as a coarse-grained trigger to a state transition function, which typically is implemented as a sequential program or method within an object-oriented programming language.
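The DEVS structure described above (input and state sets, internal and external transition functions, an output function, and a resting-time function) might be sketched as follows. This is an assumption-laden toy, not a full DEVS implementation; the buffer example is invented:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Sketch of a DEVS atomic simulator. Activity occurs only on an internal
# event (the resting time elapses) or an external event (an input arrives).

@dataclass
class AtomicDEVS:
    state: Any
    delta_int: Callable  # internal state transition function
    delta_ext: Callable  # external transition: (state, elapsed, input) -> state
    output: Callable     # output function, applied before internal transitions
    ta: Callable         # resting-time ("time advance") function

def simulate(model, inputs, until):
    """inputs: list of (time, value) external events, sorted by time."""
    t, outputs = 0.0, []
    inputs = list(inputs)
    while t < until:
        t_next = t + model.ta(model.state)          # next internal event
        if inputs and inputs[0][0] <= t_next:       # external event arrives first
            t_ext, x = inputs.pop(0)
            model.state = model.delta_ext(model.state, t_ext - t, x)
            t = t_ext
        else:                                       # internal event fires
            if t_next > until:
                break
            outputs.append((t_next, model.output(model.state)))
            model.state = model.delta_int(model.state)
            t = t_next
    return outputs

# Invented example: a buffer that emits its queue length after resting 1.0 units.
buf = AtomicDEVS(
    state=[],
    delta_int=lambda s: [],                 # internal event empties the buffer
    delta_ext=lambda s, e, x: s + [x],      # external event appends the input
    output=lambda s: len(s),
    ta=lambda s: 1.0,                       # constant resting time
)
print(simulate(buf, [(0.2, "a"), (0.5, "b"), (1.4, "c")], until=3.0))
```

Note how coarse-grained the triggering is: each event simply invokes a transition function, which is itself an ordinary sequential Python function, exactly the limitation the paragraph above points out.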
  • PDES (Parallel & Distributed System Simulation) is described in FUJIMOTO, RICHARD M., "Parallel and Distributed Simulation Systems", John Wiley & Sons, 2000.
  • parallel and distributed simulation models can be described in terms of collections of DEVS simulators.
  • PDES typically offers coarse-grained parallelism (concurrency is achieved through assigning separate sequential logical processes to different processors).
  • the fundamental difference between parallel and distributed simulations is communication latency between "logical processes" in the simulation.
  • Fujimoto identifies three different notions of time involved in a simulation, namely physical time, simulation time and wall-clock time, and notes that these are particularly important for PDES.
  • Physical time is the time in the physical system being simulated.
  • Simulation time is an abstraction of the physical time employed by the simulation, which typically speeds up (although it may slow down) physical time to meet the needs of the simulation.
  • Wall-clock time is the actual time during the execution of the simulation, and is particularly important for distributed simulations.
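The relationship between the three notions of time can be shown with a trivial worked example. The 60x speed-up factor is invented for illustration:

```python
# Illustrative sketch of the three notions of time in PDES. Physical time is
# time in the system being simulated; simulation time abstracts it; wall-clock
# time is the real time elapsing while the simulator runs.

SPEEDUP = 60.0  # invented: the simulation runs 60x faster than physical time

def physical_to_simulation(t_physical_s):
    # here simulation time simply uses physical-time units
    return t_physical_s

def simulation_to_wallclock(t_sim_s):
    # real seconds needed to execute t_sim_s of simulation time
    return t_sim_s / SPEEDUP

# Simulating one physical hour takes one real minute at this speed-up.
print(simulation_to_wallclock(physical_to_simulation(3600.0)))  # 60.0
```

For distributed simulations the wall-clock mapping matters most, because communication latency between logical processes consumes wall-clock time without advancing simulation time.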
  • the first is the gap which has developed between high-level languages and the underlying computer hardware as hardware designers have attempted to work around the limitations of the Von Neumann Architecture with techniques such as pipelining.
  • the second is the gap between the problem domain and the software, which has become more acute as the nature of the problem has adapted from being one of computation to one of system representation.
  • the interactions between different components of systems are often overlooked because they are addressed by people from different disciplines who have different tools (such as different modelling techniques) for looking at their component of the situation.
  • Many IT project failures are caused by failing to recognise the "people change management" implications of a particular technical solution, primarily because application developers and change management specialists lack the tools and language to help them understand others' points of view.
  • the invention provides several related concepts many of which individually, and in combination, create a common, holistic framework, potentially enabling parallel execution of adaptable rule sets and eliminating the semantic gaps between problem domains and software and hardware solutions.
  • an actor for effecting action in a subject system comprising: a model of the subject system, the model including: objects which represent objects within the subject system, each object in the model having associated therewith data which define which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and a processor, arranged: to respond to an event in the subject system by initiating one or more actions in the subject system which are dependent on the event, as defined by the rules of the model.
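The claimed arrangement, in which a model holds objects with states and rules mapping events (changes of state) to actions, and a processor initiates those actions in response, might be sketched as follows. All names are invented; this is an illustrative reading, not the patented implementation:

```python
# Hedged sketch of the claimed actor: the model is data (object states plus
# event-to-action rules); the processor reacts to an event, i.e. a change in
# the state of an object, by initiating the actions the rules define.

class Model:
    def __init__(self):
        self.states = {}   # object name -> current state
        self.rules = {}    # (object, new_state) event -> list of actions

    def add_rule(self, event, action):
        self.rules.setdefault(event, []).append(action)

class Processor:
    def __init__(self, model):
        self.model = model

    def set_state(self, obj, state):
        """A change in the state of an object constitutes an event."""
        self.model.states[obj] = state
        for action in self.model.rules.get((obj, state), []):
            action(self)            # initiate the actions the rules define

m = Model()
log = []
m.add_rule(("door", "open"), lambda p: log.append("switch on light"))
m.add_rule(("door", "open"), lambda p: p.set_state("alarm", "armed"))
m.add_rule(("alarm", "armed"), lambda p: log.append("notify operator"))

Processor(m).set_state("door", "open")
print(log)  # ['switch on light', 'notify operator']
```

Because the rules are plain data rather than compiled program text, nothing in this sketch requires a Program Counter: independent rules fired by the same event could in principle execute in parallel, which is the fine-grained concurrency the following paragraphs claim.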
  • the processor of an actor constructed according to the invention may not be constrained to an arbitrarily sequential paradigm, such as that created by the Von Neumann Program Counter, and so can be constructed to avoid the disadvantages thereof.
  • the processor of an actor so constructed can be capable of supporting fine-grained parallel operation.
  • the actor according to the invention allows the use of a model which does not depend on programmatic components, such as OO "Methods", to describe detailed actions.
  • the model can be directly capable of driving fine-grained parallel actors (e.g. computers, industrial machinery).
  • the model can be made to be capable of expressing constraints which would normally require additional language (e.g. OCL for UML).
  • All events and actions may be constituted as objects. This contributes to simpler models which are easier to use.
  • the processor may be arranged to execute the actions initiated in response to a single event directly in a single step. Such actions can be referred to as 'elementary actions'.
  • a composite action initiated in response to an event may be defined in terms of sub-actions, initiated in response to events directly or indirectly resulting from the execution of the composite action.
  • Each sub-action may be either an elementary action or another composite action. This allows the more accurate representation of certain events in the real world, and can allow simpler modelling, as well as enabling fine-grained parallel execution.
  • an elementary action which can be executed by the processor directly in a single step at one level may require elaboration to multiple lower level actions for the underlying processor on which the higher level processor is elaborated. However, in this case, each of these lower level actions will in turn be event-driven in the same sense.
  • the model may contain two or more sub-models, each of which is a model of a sub-system of the subject system. Using this feature, one processor can operate multiple models.
  • the actor includes a meta-actor comprising a meta-model and a meta-processor, the meta-model including: objects which represent objects within the model, each object in the model having associated therewith data which define which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, events being changes in states of objects in the model, and the meta-processor being arranged: to respond to an event in the model by initiating one or more actions in the model which are dependent on the event, as defined by the meta-model.
  • Since the meta-model is explicit, the model can be modified while being executed, allowing the actor to adapt its behaviour whilst it is in operation, potentially avoiding the requirement of the meta-model being fixed in advance, for example by compiler or toolset providers.
  • the meta-model may form part of the model, and/or the meta-processor may form part of the processor. These features can allow the actor to take on new types of model while it is in operation.
  • the meta-model may contain two or more sub-meta-models, each of which is a model of a sub-system of the model. In this way, one meta-processor can operate multiple meta-models.
  • the action of the processor is effected by a root meta-actor, comprising a root meta-model and a root meta-processor, the root meta-model being a generic model for processing a model of the same type as the model, and including: objects which represent objects within a generic model execution system, each object in the model having associated therewith data which define which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and the root meta-processor being arranged to guide the root meta-model: when an event is triggered, to examine the definition of the event in the model to determine what actions, if any, should be initiated, and to initiate those actions; when an action is initiated, to examine the definition of the action in the model to determine what objects, if any, should have their state changed, and to change the states of those objects accordingly; and when the state of an object is changed, to examine the definition of the object in the model to determine what events, if any, should be triggered as a result, and to trigger those events.
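The cycle guided by the root meta-processor (event definitions name actions, action definitions name object state changes, and object definitions name the events a state change triggers) can be sketched declaratively. The model contents are an invented toy, not the patented implementation:

```python
from collections import deque

# Sketch of the generic event/action/object cycle: the entire behaviour is
# held as data in the model, and one generic loop drives it.

model = {
    "events":  {"order_received": ["reserve_stock"],          # event -> actions
                "stock_reserved": ["schedule_delivery"],
                "delivery_booked": []},
    "actions": {"reserve_stock": [("stock", "reserved")],     # action -> state changes
                "schedule_delivery": [("delivery", "booked")]},
    "objects": {"stock":    {"reserved": ["stock_reserved"]}, # state -> events
                "delivery": {"booked": ["delivery_booked"]}},
}

def propagate(model, initial_event):
    pending, trace = deque([initial_event]), []
    while pending:
        event = pending.popleft()
        trace.append(event)
        for action in model["events"][event]:            # 1. event: initiate actions
            for obj, state in model["actions"][action]:  # 2. action: change object states
                events = model["objects"][obj].get(state, [])
                pending.extend(events)                   # 3. state change: trigger events
    return trace

print(propagate(model, "order_received"))
# ['order_received', 'stock_reserved', 'delivery_booked']
```

Because the loop itself is generic, the same `propagate` function could drive a quite different model, and, since the model is ordinary data, it could in principle be modified while being executed, which is the adaptability the surrounding paragraphs claim.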
  • the actor includes a meta-actor and a root meta-actor.
  • Since the meta-model is explicit, the model can be modified while being executed, allowing the actor to adapt its behaviour whilst it is in operation.
  • the meta-model may form part of the model. This can also allow the actor to take on new types of model while it is in operation.
  • the root meta-model may form part of the meta-model. This can allow the root meta-model to be modified while being executed, under certain circumstances. This potentially also avoids the requirement of the meta-model being fixed in advance, for example by compiler or toolset providers.
  • the processor may comprise: one or more activators, operable in response to the triggering of an event to examine the definition of the event in the model, to determine which actions, if any, to initiate and then to initiate those actions; one or more executors, arranged to effect the actions and being operable in response to the initiation of an action to examine the definition of the action in the model to determine what objects, if any, in the subject system to change the state of, and then to change the states of those objects accordingly; one or more recorders, arranged to record the outcomes of actions and being operable in response to recognising the changing of the state of an object to examine the definition of the object in the model to determine therefrom what events in the subject system, if any, should result from the changes in the states of the objects and to trigger those events; one or more interfaces to external channels via which the actor is connected to other actors or to the outside world; and one or more internal channels, via which the activators, executors, recorders and interfaces are interconnected.
  • the meta-processor may comprise: one or more activators, operable in response to the triggering of an event to examine the definition of the event in the meta-model to determine which actions in the model to initiate, and then to initiate those actions; one or more executors, arranged to effect the actions and operable in response to the initiation of an action to examine the definition of the action in the meta-model to determine what objects, if any, in the model to change the state of, and then to change those objects accordingly; one or more recorders, arranged to record the outcomes of actions, including the creation, modification or deletion of objects within the meta-model, and operable in response to the changing of the state of an object to examine the definition of the object in the meta-model to determine therefrom what, if any, events in the model should result from the changes in the states of objects, and to trigger those events; one or more interfaces to external channels via which the actor is connected to other actors or to the outside world; and one or more internal channels, via which the activators, executors, recorders and interfaces are interconnected.
  • the root meta-processor may comprise: one or more activators, operable in response to the triggering of an event to examine the definition of the event in the root meta-model to determine which actions to initiate; one or more executors, arranged to effect the actions and operable in response to the initiation of an action to examine the definition of the action in the root meta-model to determine what objects, if any, in the processor to change the state of, and then to change those objects accordingly; one or more recorders, arranged to record the outcomes of actions, including the creation, modification or deletion of objects within the root meta-model, and operable in response to the changing of the state of an object to examine the definition of the object in the root meta-model and to determine therefrom what events in the processor, if any, should result from the changes in the states of objects, and to trigger those events; one or more interfaces to external channels via which the actor is connected to other actors or to the outside world; and one or more internal channels, via which the activators, executors, recorders and interfaces are interconnected.
  • One or more of the activators, executors, recorders, interfaces or channels may be an actor as recited above. This allows for the decomposition of processor components according to the same model.
  • the model may elaborate a processor entity comprising one of a) a processor, b) a meta-processor or c) a root meta-processor of a virtual actor, thereby to enable the processor entity to be directly executed by any of a processor, meta-processor or root meta-processor of the first actor. This provides for the layering of virtual processors on top of physical processors.
  • Such an actor may be a simulator which initiates actions in a proxy subject system, and in this case the actor may include a model which includes further rules for: handling the relationships between simulation, physical and wall-clock time; handling the distribution of internal and external events defined in the detailed model of the first actor; handling the creation and deletion of simulated actors, and their assignment and reassignment of roles within the model being simulated; and handling the assignment and elaboration of simulated actors to physical actors.
  • the virtual actor may be arranged to elaborate a processor entity of a further virtual actor.
  • This provides for multiple layers of virtual processors, e.g. for use in an operating system.
  • the model may contain two or more sub-models, each of which elaborates the behaviour of a processor entity of one or more other virtual actors. This allows multiple virtual processors to be supported by a single physical processor.
  • the invention also provides a system comprising two or more actors each as described above and being arranged to effect action in a common subject system, each actor being connected to each of at least one of the other actors via a respective channel.
  • the channels enable the actors to communicate with each other.
  • Each actor may be connected to its respective channel by a first interface forming part of the actor and by a second interface forming part of the channel.
  • the channel is provided with an interface to an actor.
  • any or all of the channels or interfaces may be actors as described above. Providing some or all of the channels and/or interfaces in this way allows them to have the advantages outlined above in connection with the actor of the invention.
  • An actor as outlined above may itself comprise a system as outlined above; that is, an actor may be composed of a system of actors.
  • a logic-based computing device including an actor for effecting action in a subject system, the device comprising: means for implementing a model of the subject system, the model including: objects which represent objects within the subject system, each object in the model having associated therewith an indication of which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and means for implementing a processor, arranged: to respond to an event in the subject system by initiating one or more actions in the subject system which are dependent on the event, as defined by the rules of the model.
  • This provides a logic based computing device which can provide in the computing domain all the benefits listed above as deriving from the actor of the invention.
  • the processor of this logic-based computing device may comprise one or more activators, one or more executors, one or more recorders, one or more internal channels and one or more interfaces.
  • This provides a device which is capable of fine-grained parallel operation. This is unlike the Von Neumann paradigm, which necessarily is sequential. It is also unlike the CSP paradigm, which supports only coarse-grained parallelism. It also is applicable to more general situations than either the Dataflow or Actor paradigms.
  • At least one activator includes: an event queue register, operable to contain a reference to an item in the event queue; an event register, operable to contain a reference to a current event; and an event type register, operable to contain a type of the current event.
  • At least one executor includes: an action queue register, operable to contain a reference to an item in the action queue; an action register, operable to contain a reference to a current action; and an action type register, operable to contain a type of the current action.
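The three-register layout of an activator and an executor described above might be sketched, purely by way of illustration and not as the claimed hardware, as plain data structures; all field names here are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivatorRegisters:
    """An activator's registers: its position in the event queue,
    the current event, and that event's type."""
    event_queue_ref: Optional[int] = None  # reference to an item in the event queue
    event_ref: Optional[int] = None        # reference to the current event
    event_type: Optional[str] = None       # type of the current event

@dataclass
class ExecutorRegisters:
    """An executor's registers: the analogous set for actions."""
    action_queue_ref: Optional[int] = None  # reference to an item in the action queue
    action_ref: Optional[int] = None        # reference to the current action
    action_type: Optional[str] = None       # type of the current action

regs = ActivatorRegisters(event_queue_ref=0, event_ref=7, event_type="state_change")
```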
  • Any such device may comprise a single activator, a single executor, a single recorder, one or more internal channels and one or more interfaces together provided on a single processing unit or integrated circuit.
  • This allows the device to be implemented using a single processor, equivalently to a conventional computer with a single CPU, but which is capable of processing the model of the actor, which here is a parallel model, directly.
  • the device may comprise a single activator, a single executor, a single recorder and one or more interfaces, each of these components being provided on a respective processing unit or integrated circuit, the components being connected to each other via one or more channels.
  • This provides a multi-processor implementation, capable of directly processing the same parallel model with no modifications, but with greater throughput. This has no real equivalent in conventional multiple CPU architectures, which tend to add considerable complexity in order to translate the sequential description of the system embodied in a program to allow it to be executed in parallel.
  • plural activators can be connected via the one or more channels, and at least two of the activators may share a common event queue. This relates to the provision of fine-grained parallel activation.
  • plural executors can be connected via the one or more channels, and at least two of the executors can share a common action queue. This relates to the provision of fine-grained parallel execution.
  • plural recorders can be connected via the one or more channels, and at least two of the recorders can share a common object queue. This relates to the provision of fine-grained parallel recording.
  • the activator or one or more of the activators each may comprise any of the logic-based devices above.
  • the executor or one or more of the executors each may comprise any of the logic-based devices above.
  • the recorder or one or more of the recorders each may comprise any of the logic-based devices above.
  • a model of the actor may contain rules which enable the elaboration of a processor, a meta-processor or a root meta-processor onto a computer or computer system with Von Neumann-like architecture or onto an operating system which manages the resources of, or provides a simple interface to, a computer or computer system with Von Neumann-like architecture.
  • This allows conventional Von Neumann hardware to operate as a pseudo-root meta-actor-based computing system or device.
  • Another aspect of the invention provides a computer system having plural resources, each resource being managed by, and constituting a subject system of, an actor or a computing device as described above. This allows a configuration providing a tightly- or loosely-coupled integrated computing system, such as a personal computing system. Distributing interface activity to dedicated interface processors can free a primary processor from being involved in such activities (unlike in a conventional Von Neumann processor), potentially significantly increasing throughput of the primary processor.
  • Another aspect of the invention provides one of certain of the above actors and a compiler arranged to use a model of a Von Neumann computer or a layered operating system to translate statically the model of the actor into the object or machine code of a Von Neumann computer or computer system or an operating system which manages the resources of or provides a simpler interface to a Von Neumann computer or computer system.
  • the compiler includes a meta-translator and a hardware meta-model arranged to translate statically the model of the actor.
  • a system comprising plural actors with elaborating models as described above may be arranged together to manage the resources of a computer system and enable the simultaneous execution of one or more models, meta-models or root meta-models on that computer system.
  • This system can constitute an operating system operating according to the advantageous principles described in this specification.
  • the computer system may be a Von Neumann-like computer or computer system, or it may be a root meta-actor logic-based computing device or system of such devices.
  • the processor may comprise a logic based computing device as defined above. This can enable massively parallel simulations. The complexity of managing massively parallel simulation can be considerably reduced compared to the situation if it were performed on conventional Von Neumann hardware, even if parallel.
  • a compiler arranged to use a model of a Von Neumann-like computer or a model of an operating system which manages the resources of, or provides a simpler interface to, a Von Neumann-like computer to translate statically an application model comprising: objects which represent objects in the application, each object in the application model having associated therewith data which define which of two or more states the object is in at a given time; and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, into the object or machine or assembler code of a Von Neumann-like computer or computer system or an operating system which manages the resources of, or provides a simpler interface to, a Von Neumann-like computer or computer system.
  • the application model may additionally include any of the other features of models of the actors of the claims.
  • the invention also provides a system comprising first to fifth actors, each of the actors being an actor as described above, in which the first to fifth actors are respective ones of: an operating actor, operable to perform an operation or transformation process; a directing actor operable to determine purpose and performance goals of the system, and to provide the other actors with those goals; a managing actor operable to control all other actors within the system with the aim of achieving the purpose and performance goals provided by the directing actor; a learning actor, operable to develop and maintain at least one model forming part of each of the other actors; and an enabling actor, operable to provide sub-actors operable to perform the duties of the first to fifth actors.
  • This provides components with which a viable system (i.e. a system able to maintain a separate existence, such as an artificially intelligent agent or a business enterprise) can be constructed.
  • One or more of the first to fifth actors may each comprise a system as described in the paragraph immediately above. This offers a recursively defined viable system. This is particularly useful when designing complex viable systems, such as viable intelligent agents or viable business enterprises.
  • the operating actor advantageously is arranged to operate a change system comprising: an investigation sub-system arranged to investigate a problem or opportunity in a target system to determine what needs to be changed, by modelling of rules in terms of objects, actions and events and assignment and elaboration of the roles of actors in one or more of the target system, a problem system and an opportunity system and simulating the resulting model to test the detailed rules and to analyse the dynamics of the resulting system; a development sub-system responsive to the completion of the investigation sub-system to model and simulate change to the target system by modelling of the objects, rules and actors in the target system and any other system modelled in the investigation system, and simulating the resulting model to test the detailed rules and to analyse the dynamics of the resulting system; a preparation sub-system responsive to the completion of the investigation sub-system to model and to simulate the temporary system by which the change can be deployed by modelling of the objects, rules and actors in the target system and any other system modelled in the investigation system, and simulating the
  • a method of effecting action in a subject system comprising: maintaining a model of the subject system, the model including: objects which represent objects within the subject system, each object in the model having associated therewith data which define which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and controlling a processor, to respond to an event in the subject system by initiating one or more actions in the subject system which are dependent on the event, as defined by the rules of the model.
  • this method additionally is applicable to many classes of actors, including computers, machinery, people and organisations, and therefore is capable of supporting the design and development of complex socio-technical systems.
  • a method of effecting action in a subject system comprising: maintaining in a logic-based computing device means for implementing a model of the subject system, the model including: objects which represent objects within the subject system, each object in the model having associated therewith an indication of which of two or more states the object is in at a given time, and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and maintaining in the logic-based computing device means for implementing a processor, and controlling the processor to respond to an event in the subject system by initiating one or more actions in the subject system which are dependent on the event, as defined by the rules of the model.
  • a method of controlling a compiler to use a model of a Von Neumann-like computer or a model of an operating system which manages the resources of, or provides a simpler interface to, a Von Neumann-like computer to translate statically an application model comprising: objects which represent objects in the application, each object in the application model having associated therewith data which define which of two or more states the object is in at a given time; and rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, into the object or machine or assembler code of a Von Neumann-like computer or computer system or an operating system which manages the resources of, or provides a simpler interface to, a Von Neumann-like computer or computer system.
  • a method of modelling a subject system comprising: maintaining a model of the subject system, the model including: objects which represent objects within the subject system, each object in the model having associated therewith data which define which of two or more states the object is in at a given time, rules which define actions which are to be initiated in response to events, an event being the change in the state of an object, and rules which define a composite action initiated in response to an event in terms of sub-actions each of which is initiated in response to events directly or indirectly resulting from the execution of the composite action.
  • a method of operating a system comprising: when an event is triggered, examining a definition of the event to determine what actions, if any, should be initiated, and initiating those actions, when an action is initiated, examining a definition of the action to determine what objects, if any, should be changed in terms of state, and changing those objects accordingly, and when the state of an object is changed, examining a definition of the object state change to determine whether the changed state should trigger any events, and triggering those events.
  • apparatus for operating a system, the apparatus comprising: one or more activators responsive to the triggering of an event to examine a definition of the event and to determine therefrom whether any actions should be initiated, and for initiating each of the actions so determined, one or more executors responsive to the initiation of an action to examine a definition of the action and to determine therefrom whether any objects should be changed in terms of state, and for changing the state of each of the appropriate objects accordingly, and one or more recorders responsive to the changing of the state of an object for examining a definition of the object state change to determine therefrom whether any events should be triggered, and for triggering each of the events so determined.
  • the eighth and ninth aspects can allow truly event-based system operation, and in some implementations can allow the avoidance of flow-jump computing altogether.
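As a rough software analogue of the apparatus above (which the aspects cover in hardware and other forms), the activator/executor/recorder division might look like the following Python sketch; the rule tables and the motor example are invented for illustration:

```python
from collections import deque

# Hypothetical rule tables standing in for the model's definitions.
EVENT_TO_ACTIONS = {"switch_on": ["start_motor"]}            # event -> actions to initiate
ACTION_TO_CHANGES = {"start_motor": [("motor", "running")]}  # action -> (object, new state)
STATE_TO_EVENTS = {("motor", "running"): ["motor_started"]}  # state change -> events

event_queue, action_queue, object_queue = deque(["switch_on"]), deque(), deque()
objects = {"motor": "stopped"}

def activator():
    """Examine the definition of a triggered event; initiate its actions."""
    event = event_queue.popleft()
    action_queue.extend(EVENT_TO_ACTIONS.get(event, []))

def executor():
    """Examine the definition of an initiated action; change object states."""
    action = action_queue.popleft()
    for obj, state in ACTION_TO_CHANGES.get(action, []):
        objects[obj] = state
        object_queue.append((obj, state))

def recorder():
    """Examine an object state change; trigger any resulting events."""
    change = object_queue.popleft()
    event_queue.extend(STATE_TO_EVENTS.get(change, []))

activator(); executor(); recorder()
# objects["motor"] is now "running"; "motor_started" awaits the next activation
```

Because each component only consumes from and appends to queues, several activators, executors or recorders could in principle run in parallel against shared queues, which is the fine-grained parallelism referred to above.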
  • Processor: something which effects action in a subject system, guided by a model.
  • Subject system: a collection of tangible or conceptual interacting objects which have states, the collection having separate existence within an environment as defined by an observer or modeller, and which is the subject of action effected by one or more actors.
  • Object in a subject system: a tangible or conceptual thing. Some of the objects in a subject system can have two or more states. An object in a subject system may be a physical, three-dimensional article. In some embodiments, an object in a subject system may be a component of a viable business system, such as a human, technological, financial or other resource.
  • Object in a model: a thing which represents an object in the subject system and which may have two or more different states.
  • Event: an instantaneous change in a discrete state of an object.
  • Activator: a physical or virtual actor or device which responds to events to determine from one or more models which actions to initiate.
  • Executor: a physical or virtual actor or device which effects actions to change one or more objects in accordance with one or more models.
  • Recorder: a physical or virtual actor or device which manages objects forming part of one or more models, and recognises when changes of the discrete states of such objects trigger events according to those same models.
  • Elaborator: a physical or virtual actor or device which interprets the behaviour of a processor in terms which enable the direct execution of that behaviour on another, typically lower-level, processor.
  • Figure 1 is an illustration of real world adaptive or self-adaptive systems
  • Figure 2 shows the architecture of a prior art Von Neumann computer
  • Figure 3 shows a prior art dataflow machine
  • Figure 4 illustrates systematically prior art software elaboration onto computer hardware
  • Figure 5 illustrates a modern layered operating system according to the prior art
  • Figure 6 illustrates a prior art generic learning agent
  • Figure 7 shows flow charts, which might form part of a prior art manufacturing process
  • Figures 8A to 8D show known IDEF and UML models
  • Figure 9A shows prior art Information Engineering Methodology
  • Figure 9B illustrates schematically generic object-oriented methodology according to the prior art
  • Figure 10 illustrates an active system including an actor according to one aspect of the present invention
  • Figures 11A and 11B illustrate alternative representations of a model-based actor according to certain aspects of the invention.
  • Figure 12 elaborates on a model forming part of the Figure 11 actor
  • Figure 13 elaborates on a processor forming part of the Figure 11 actor;
  • Figure 14 illustrates a simple event-driven control model according to certain aspects of the invention.
  • Figure 15 illustrates a root meta-actor according to certain aspects of the invention
  • Figure 16 illustrates a root meta-execution cycle, which is according to certain aspects of the invention and which may be implemented using the root meta-actor of Figure 15;
  • Figures 17A and 17B illustrate one possible structural root meta-model used in the root meta-actor of Figure 15;
  • Figures 18A to 18D illustrate the behavioural components of the root meta-model of Figures 17A and 17B;
  • Figure 19 illustrates a simple event-driven production assembly model constructed according to the invention
  • Figure 20 illustrates an actor comprising first and second sub-actors, each of which acts on a subject system, according to certain aspects of the invention
  • Figure 21 illustrates a partial composite root meta-processor used with the invention
  • Figures 22A and 22B are keys to some of the other Figures.
  • Figure 23 illustrates connection of the actors according to the invention using a channel, according to certain aspects of the invention.
  • Figure 24 illustrates a root meta-processor according to and used by certain aspects of the invention;
  • Figure 25 illustrates a recursive root meta-processor according to certain aspects of the invention
  • Figure 26 illustrates layering of root meta-actors, including multi-actor elaboration, according to the invention
  • Figures 27A and 27B give an example of an elaboration model which might form part of the Figure 26 system
  • Figure 28 illustrates an implementation of an electronic root meta-actor using a single microprocessor according to the invention
  • Figure 29 illustrates an implementation of an electronic root meta-actor using multiple processors according to the invention
  • Figure 30 illustrates an implementation of an electronic root meta-actor using micro-parallel architecture according to certain aspects of the invention
  • Figure 31A illustrates an implementation of an integrated personal computing system employing the micro-parallel architecture illustrated in Figure 30
  • Figure 31B illustrates an implementation of an operating system to control the integrated personal computing system illustrated in Figure 31A
  • Figure 32A illustrates a translation of a meta-actor statically onto conventional hardware according to certain aspects of the invention
  • Figure 32B illustrates the placing of a virtual elaboration machine onto conventional hardware, according to certain aspects of the invention
  • Figure 33 illustrates the components of a designed self-adaptive system according to aspects of the invention
  • Figure 34 illustrates the Figure 33 system with further recursion
  • Figure 35 illustrates the system of Figure 34 with still further recursion
  • Figure 36 illustrates a methodology according to the invention utilising root meta-actors.
  • an active system in which the invention is embodied is illustrated.
  • an actor 10-1 is able to effect action 10-4 in a subject system 10-2.
  • the actor 10-1 and the subject system 10-2 exist in an environment 10-3 which can impact the subject system 10-2.
  • Neither the actor 10-1 nor the subject system 10-2 has any control over the environment 10-3.
  • the actor 10-1 is elaborated in Figure 11A.
  • the actor 10-1 includes a model 11-1 and a processor 11-2.
  • the processor 11-2 is guided 11-5 by the model 11-1.
  • the processor 11-2 is arranged to effect action 11-4 in the subject system 10-2. Since the processor 11-2 forms part of the actor 10-1, the effecting 11-4 of action in the subject system 10-2 is the same as the effecting action 10-4 of Figure 10.
  • the subject system 10-2 is known 11-3 by the model 11-1, forming part of the actor 10-1, which completes a loop including the model 11-1, the processor 11-2 and the subject system 10-2. This allows the actor 10-1 to be guided in its action 11-4 on the subject system 10-2 by the model 11-1 of the subject system 10-2.
  • the actor 10-1 can be described as a "model-based actor". Events can occur in the subject system 10-2 either through the actions of the actor 10-1, as guided by the model 11-1, or through actions of other actors (not shown) which also act on the subject system 10-2, or through a change in state of the subject system itself (e.g. the progression of a chemical reaction) or its environment 10-3 (e.g. the passage of time).
  • the actor 10-1 keeps the model 11-1 updated with its own actions.
  • While the processor 11-2 is processing according to the model 11-1, it updates the model 11-1 with intermediate actions and with the actions it effects in the subject system 10-2.
  • the actor 10-1 is capable of sensing events i.e. changes in the states of objects, in the subject system 10-2 which are caused by the actions of other actors, or by changes in the state of the subject system itself or its environment 10-3.
  • the subject system 10-2 "is known to" the model 11-1 in three ways, via three different routes.
  • a first route 11-6 is via a modeller 11-7, who identifies the subject system 10-2 and builds the model 11-1 of it in terms of the objects, events, actions, and so on which he or she considers significant. This is the mechanism by which the rules which will guide the processor 11-2 in its actions are established in the model 11-1.
  • the modeller 11-7 is shown in Figure 11B as being in the environment 10-3 of the system and separate from the actor 10-1. However, in some aspects of the invention, the modeller 11-7 is part of the actor 10-1.
  • a second route 11-8 is via the processor which uses sensors 11-9 to detect events or changes in state of objects in the subject system 10-2.
  • the processor 11-2 updates, via route 11-10, the model with the current state of the subject system 10-2 and with events it has recognised in the subject system.
  • a third route 11-12 is the update of the model 11-1 by the processor 11-2 directly, in particular to update the model 11-1 with outcomes of actions, including intermediate actions, which the processor 11-2 will effect in the subject system 10-2 via its effectors 11-11.
  • the routes 11-8, 11-10 and 11-12 together are equivalent to the route 11-3 of Figure 11A.
  • the model 11-1 is elaborated in Figure 12.
  • the model 11-1 "knows" the subject system 10-2 in the sense that it contains objects 12-3 which represent the significant objects within the subject system 10-2, and has rules which define which changes in the state 12-1 of each object 12-3 should trigger events 12-2 which in turn should cause actions 12-4 to be initiated by the processor 11-2.
  • an event 12-2, which reflects a change in the state 12-1 of an object 12-3, or the creation or deletion of an object 12-3, initiates an action 12-4.
  • Each action 12-4 can effect change in one or more objects 12-3.
  • No action 12-4 can be initiated without an event 12-2.
  • the model 11-1 can therefore be described as an event-driven model.
  • arrows represent references from each object (state, event or action) to other objects (states, events or actions). So, for instance, an arrow 12-5 between action 12-4 and event 12-2 shows that an action 12-4 is initiated by one and only one event 12-2. On the other hand, one event 12-2 could initiate more than one action 12-4.
  • a double-headed arrow 12-8 between action 12-4 and object 12-3 indicates that each action 12-4 can effect a change in more than one object 12-3, and that each object 12-3 can be changed by more than one action 12-4.
  • An arrow 12-9 shows that a state 12-1 is relevant to only one object 12-3.
  • An event 12-2 is an instantaneous change in the state 12-1 of an object 12-3, from one state, shown by the arrow 12-6, to another state, shown by the arrow 12-7.
  • the creation or deletion of an object 12-3 is considered as a change in state between existing and non-existing.
  • the outcomes of actions may include the creation, modification or deletion of objects within the model. Each of these outcomes can be considered to be a change in the state of an object. Instead of objects being created or destroyed, they may be activated or deactivated respectively, which could result in easier object handling.
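The structure described in Figure 12 might be sketched as data types, one hedged reading among many; the valve/alarm example and all names here are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Obj:
    """An object 12-3 in the model: represents an object in the subject
    system and is in exactly one discrete state 12-1 at any time."""
    name: str
    state: str

@dataclass
class Event:
    """An event 12-2: an instantaneous change in the state of one object
    (creation/deletion modelled as a change to or from 'non-existing')."""
    obj: Obj
    from_state: str  # arrow 12-6
    to_state: str    # arrow 12-7

@dataclass
class Action:
    """An action 12-4: initiated by one and only one event (arrow 12-5),
    but able to change more than one object (double-headed arrow 12-8)."""
    name: str
    trigger: Event
    targets: List[Tuple[Obj, str]] = field(default_factory=list)

valve = Obj("valve", "closed")
opened = Event(valve, "closed", "open")
alarm = Obj("alarm", "off")
sound = Action("sound_alarm", opened, [(alarm, "on")])
```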
  • the processor 11-2 is elaborated in Figure 13.
  • an event 12-2 is responded to by initiating one or more actions 12-4 which are dependent on it.
  • the actions 12-4 which are dependent on an event 12-2 are defined by the model 11-1.
  • Each action 12-4 can change one or more objects 12-3.
  • the changing of an object 12-3 can trigger one or more further events 12-2. This can be termed an event execution cycle.
  • the event execution cycle continues until no new events 12-2 are generated by actions 12-4 or by changes to objects 12-3 (e.g. from events in the environment 10-3 ) and all events 12-2 have been processed to completion.
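A minimal sketch of such an event execution cycle, assuming a single sequential processor and invented tank/valve rules, might be:

```python
# Hypothetical rule tables standing in for the model 11-1; all invented.
RULES = {"tank_full": ["close_valve"]}           # event -> dependent actions
EFFECTS = {"close_valve": {"valve": "closed"}}   # action -> object state changes
TRIGGERS = {("valve", "closed"): []}             # state change -> further events

objects = {"valve": "open"}
events = ["tank_full"]

while events:                                    # the cycle runs until quiescence
    event = events.pop(0)
    for action in RULES.get(event, []):          # initiate dependent actions 12-4
        for obj, state in EFFECTS.get(action, {}).items():
            objects[obj] = state                 # change objects 12-3 ...
            events += TRIGGERS.get((obj, state), [])  # ... possibly triggering events 12-2
```

The loop terminates exactly when no action generates a new event and all pending events have been processed to completion, mirroring the bullet above.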
  • An event-driven control model illustrating an application of the system of Figures 11 to 13, such as might be used in the automated control of an engine dynamometer, is shown in Figure 14.
  • Figure 22A is a key to Figure 14.
  • execution begins with the creation of the six objects which are shown on the left-hand side of the drawing.
  • 'accel' is used as a short form for 'acceleration'.
  • the creation of each of the objects labelled accel_rate 14-1, accel_period 14-12, engine_speed 14-3 and last_accel_time 14-4 triggers an event which in turn initiates an action, in this case to set an initial value for each of these objects.
  • a first action 14-6 calculates a value for accel_multiplier 14-2.
  • the completion of the action 14-6 initiates a further action 14-7 which is to assign a time of 01:00 to a current_time object 14-8.
  • the assignment of a value to the current_time object 14-8 triggers a change event 14-20, as described below.
  • the completion of this time assigning action 14-7 is also one of two possible events which can then initiate an action 14-9 which compares the value given by the current_time object 14-8 with the final time of the model, which in this example is set as 72:00 (i.e. 72 hours), applied as an input of the action 14-9.
  • the model continues by incrementing the current_time 14-8 by one hour at action 14-10.
  • the completion of the incrementing action 14-10 is the second of the events which may initiate the comparison by the action 14-9 of the value of the current_time object 14-8 with the value of the final time.
  • the value of an accel_period object 14-21 here is initially set at 06:00 (i.e. 6 hours) by action 14-14. If the result of the action 14-13 is greater than the value of the accel_period object 14-21, two parallel actions 14-15, 14-16 are initiated. The first 14-15 of these actions sets the value of the last_accel_time object 14-4 to the value of the current_time object 14-8, and the second 14-16 of these actions calculates a new engine_speed using the value of the accel_multiplier object 14-2 which was calculated at the beginning of the model.
  • event 14-17 is triggered which, in turn, initiates action 14-18, which prints the values of the current_time and engine_speed objects 14-8, 14-3; such values would also be available to any automated control system to which the model might be connected.
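The arithmetic of the Figure 14 model can be re-created in a few lines of Python. Only the 01:00 start, the one-hour increment, the 06:00 accel_period and the 72:00 final time come from the text; the initial engine speed, the multiplier value and the exact comparison operators are assumptions:

```python
accel_period = 6          # hours; set by action 14-14
final_time = 72           # hours; input to comparison action 14-9
accel_multiplier = 1.1    # assumed value; action 14-6 calculates it
engine_speed = 1000.0     # assumed initial value
last_accel_time = 0       # assumed initial value
current_time = 1          # action 14-7 assigns 01:00

log = []
while current_time <= final_time:                 # comparison action 14-9
    if current_time - last_accel_time > accel_period:
        last_accel_time = current_time            # action 14-15
        engine_speed *= accel_multiplier          # action 14-16
        log.append((current_time, engine_speed))  # action 14-18 prints these
    current_time += 1                             # incrementing action 14-10
```

Under these assumptions the engine speed is stepped up every seven hours until the 72-hour final time is reached, with each (time, speed) pair recorded as action 14-18 would print it.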
  • Figure 15 illustrates a root meta-actor 15-0 acting on a subject system 10-2 within an environment 10-3.
  • the root meta-actor 15-0 is an extension of the actor 10-1 shown in Figures 11 to 13.
  • the model 11-1 guides 11-5 the processor 11-2.
  • the model 11-1 itself is known by a meta-model (MM) 15-1, which may be included in and form part of the model 11-1. Accordingly, the model 11-1 can be considered as including a model (the meta-model 15-1) of itself.
  • the meta-model 15-1 is a model of a model (the model 11-1); i.e., it is a model whose subject system is another model (the model 11-1).
  • the meta-model 15-1 guides a meta-processor (MP) 15-2, which may be included in and form part of the processor 11-2.
  • the meta-processor 15-2 effects action in the model 11-1.
  • the meta-processor 15-2 may alternatively be external to the processor 11-2.
  • the meta-model 15-1 shapes the structure of the model 11-1 by defining the types of objects it can contain and the way in which these types of objects can be related.
  • the meta-model 15-1 also shapes behaviour within the model 11-1 by defining the valid combinations of events and actions which can be employed in effecting action within the model 11-1. Control of the model 11-1 by the meta-model 15-1 is exerted via the meta-processor 15-2, through which all action in the model 11-1 is effected. Consequently, the meta-processor 15-2 may also be termed a "model adapter".
  • Where the meta-model 15-1 and the meta-processor 15-2 lie outside the model 11-1 and the processor 11-2 respectively, no action or change can be effected within the meta-model 15-1 itself.
  • if, however, the meta-model 15-1 is part of the model 11-1 and the meta-processor 15-2 is one component or function of the processor 11-2, as is shown, the meta-model 15-1 can be adapted by the processor 11-2.
  • the model 11-1 reflectively contains a model (the meta-model 15-1) of itself and, consequently, the structure of the model 11-1 and the behaviour of the processor 11-2 (as guided by the model 11-1) can be changed while the root meta-actor 15-0 is operating. This contrasts notably with the Meta-Object Framework and Metaobject Protocol approaches described in the prior art above.
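The reflective arrangement described in the preceding points can be sketched as follows (a minimal, hypothetical sketch; all names are illustrative, not taken from the specification): a model stored as plain data which contains, among its own objects, the definitions of its admissible object types, so that changing those definitions at run time changes what the processor will accept next, without stopping the system.

```python
# A minimal sketch of a self-describing model (all names hypothetical).
# The meta-model is stored inside the model itself, so the running
# system can adapt its own structural rules while operating.

model = {
    # ordinary objects in the model
    "objects": {"engine_speed": 950},
    # the meta-model: definitions of the object types the model may contain
    "meta": {"object_types": {"engine_speed": int}},
}

def set_object(model, name, value):
    """Change an object, but only if the meta-model admits its type."""
    expected = model["meta"]["object_types"].get(name)
    if expected is None or not isinstance(value, expected):
        raise TypeError(f"{name} not admitted by the meta-model")
    model["objects"][name] = value

set_object(model, "engine_speed", 1200)          # allowed by the meta-model

# Adapt the meta-model while "running": admit a new object type...
model["meta"]["object_types"]["current_time"] = float
set_object(model, "current_time", 12.5)          # ...now allowed too
```

Because the meta-model is itself part of the model's data, the same `set_object` machinery could in principle be pointed at the `"meta"` entry, which is the sense in which the structure is freed to adapt.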
  • the subject system of a meta-actor 15-2, 15-1 is a system of one or more models 11-1.
  • the meta-processor 15-2 also acts as the modeller for the actor 11-1, 11-2 in modelling the primary subject system 10-2.
  • the meta-modeller for the meta-actor 15-2, 15-1 is external to the meta-actor 15-2, 15-1 (and the actor 11-1, 11-2) if the meta-model 15-1 is not part of the model 11-1. However, if the meta-model 15-1 is part of the model 11-1, the meta-processor 15-2 takes on the role of meta-modeller as well as modeller.
  • the processor 11-2 is known by a root meta-model (RM) 15-3, which may form part of the meta-model 15-1, in the sense that the root meta-model 15-3 is a generalised model for model execution shared by processors 11-2 of the same class.
  • the root meta-model 15-3 can also be considered as a generalised model of the meta-model 15-1, and, hence, also of the model 11-1; all objects in the meta-model 15-1 and model 11-1 are instances or specialisations of objects in the root meta-model 15-3.
  • the root meta-processor 15-4 effects action in the processor 11-2 and, if it is included in the processor 11-2, the meta-processor 15-2, in accordance with and under the guidance of the root meta-model 15-3.
  • the processor 11-2 is responsive to an event to initiate in the subject system 10-2 the actions which are given by the model 11-1.
  • the meta-processor 15-2 is responsive to an event to initiate in the model 11-1 the actions which are given by the meta-model 15-1.
  • the root meta-processor 15-4 is responsive to an event to initiate in the processor 11-2 the actions which are given by the root meta-model 15-3. Therefore, the root meta-processor 15-4 can take on the function of any processor (or meta-processor), which in turn can be guided by any model (or meta-model) from the class which the generalised root meta-model 15-3 represents.
  • the root meta-model 15-3 is a generalised version of all meta-models 15-1 of a particular class. For instance, it could be a generalised version of the event-driven model described below with reference to Figure 16. As such, the root meta-model 15-3 defines the valid structures and behaviours which can be embodied in any meta-model 15-1, and hence any model 11-1.
  • the events which initiate action in the root meta-processor 15-4 on behalf of the processor 11-2 are generated in the subject system 10-2, or in the environment 10-3.
  • the events which initiate action in the root meta-processor 15-4 on behalf of the meta-processor 15-2 are generated in the model 11-1 (which is the subject system of the meta-model 15-1), or in the environment 10-3.
  • An event may not give rise to any actions, or it may give rise to one, two or more actions. Also, each initiated action will typically give rise to one or more object state changes, each of which may in turn give rise to one or more further events. Each of these events is then processed by the appropriate one of the processor 11-2, the meta-processor 15-2 and the root meta-processor 15-4, and the process continues until no further events are generated.
  • if the root meta-model 15-3 lies outside the meta-model 15-1 and the model 11-1, the root meta-model itself cannot be adapted, in effect "hard wiring" the structure and behaviour of the root meta-processor 15-4. If, however, the root meta-model 15-3 is defined as part of the meta-model 15-1 within a model 11-1, as is shown, the root meta-processor 15-4 can adapt the root meta-model 15-3 through the meta-processor 15-2 (whose functions the root meta-processor 15-4 has taken on), thereby freeing its structure and behaviour to adapt.
  • the subject system of a root meta-actor 15-3, 15-4 is a generic model execution system.
  • Figure 16 shows a root meta-execution cycle.
  • the cycle includes an action 12-4 which is driven 16-1 by an event 12-2. Actions 12-4 can result in changes 16-2 in objects 12-3, which in turn can result 16-3 in events 12-2. This level of the cycle is illustrated in Figure 13, as mentioned above.
  • the root meta-execution cycle of Figure 16 is as follows.
  • the root meta-processor 15-4 examines 16-4 its definition 16-5 of the event to determine 16-6 which action definitions 16-7 are associated with, and which actions should therefore be initiated 16-1 by, the event 12-2. These actions 12-4 are then initiated 16-8.
  • the root meta-processor 15-4 uses 16-9 the definition of the action 16-7 to determine 16-15 which objects' definitions 16-10 are affected, and then initiates 16-11 changes 16-2 in the associated objects 12-3 as prescribed.
  • the root meta-processor 15-4 examines 16-12 the definition of the object 16-10 to find out whether the change in the object should trigger 16-13 any further event definitions 16-5.
  • the further events 12-2 are triggered 16-3, thereby initiating a further root meta-execution cycle for the newly triggered event.
  • the changing of an object 12-3 is constituted by a change in the state of the object 12-3, and the determination of what events are to be triggered by the changing of a state of the object includes examining a definition 16-10 of the object's change of state. Actions may also or instead result in the creation or deletion of objects 12-3, which are special types of changes in the state. Whether or not objects 12-3 are to be created or deleted is determined by the action definition 16-7.
  • Figure 16 represents a "fine-grained" event-driven execution cycle, as opposed to the "coarse-grained" event-driven processing of OO GUIs and DBMS triggers described in the prior art above. Every single action, down to the Elementary Actions which are equivalent to instructions in conventional software (as described with reference to Figure 17 below), is initiated in response to an event recognised as a change in state of an object, including any object which represents another action. This makes this aspect of the invention considerably more amenable to parallel computing architectures, particularly those described below with reference to Figures 29 and 30.
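The root meta-execution cycle described above can be sketched as a worklist loop (a simplified, hypothetical sketch; the dictionaries stand in for the event definitions 16-5, action definitions 16-7 and object definitions 16-10, and all names are illustrative): each event is looked up to find its actions, each action changes objects, and each object change may enqueue further events, until the queue drains.

```python
from collections import deque

# Hypothetical definitions standing in for 16-5, 16-7 and 16-10.
event_defs = {"order_created": ["build"]}          # event -> actions
action_defs = {"build": [("status", "built")]}     # action -> object changes
object_defs = {"status": {"built": ["done"]}}      # object state -> events

def run(initial_events, objects):
    """Drain the event queue: event -> actions -> changes -> events."""
    queue = deque(initial_events)
    while queue:
        event = queue.popleft()
        for action in event_defs.get(event, []):        # examine 16-4/16-6
            for obj, new_state in action_defs[action]:  # use 16-9/16-15
                objects[obj] = new_state                # initiate change 16-11
                # examine 16-12/16-13: does this change trigger further events?
                queue.extend(object_defs.get(obj, {}).get(new_state, []))
    return objects

state = run(["order_created"], {"status": "pending"})
```

Note that the loop here is sequential only for simplicity; because every action is initiated solely by an event, independent events in the queue could equally be processed in parallel, which is the point made about parallel architectures above.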
  • Figures 17A and 17B show the structural components of a root meta-model 15-3.
  • This is a simplified model (i.e. it leaves out some behavioural details, shown in Figure 18) of the key object types required to describe a model, or a meta-model, and the key relationships between them. It is most easily understood by breaking it down into four quadrants resulting from the overlaps of two dimensions.
  • in one dimension the object type is that of an Archetype or an Instance, and in a second dimension it can be a Structural or a Behavioural component of the model.
  • This gives four quadrants, namely Structural Archetypes and Behavioural Archetypes, shown in Figure 17A, and Structural Instances and Behavioural Instances, shown in Figure 17B.
  • Figures 22A and 22B provide a key to Figures 17A and 17B.
  • Each Object Type represents a group of objects with the same properties, or attributes, and behaviours. All the Object Types in this model are also instances of Objects within the model; that is, all Object Types are sub-types of the Object object type. Therefore, the Object Type "Object" covers all four quadrants. Every Object is a member of at least one Finite Collection.
  • the Object Type collection being a sub-collection of the Object collection, which is in turn an instance of the Object Type collection.
  • Structural Instances are illustrated in Figure 17B. All Structural Instances are either simple or complex. The simple objects in such a model have only one element and are either Value Objects 17-1 or Reference Objects 17-2. Value Objects 17-1 are those such as integers or dates whose content at any point in time is simply a single occurrence of a well-defined, though typically infinite, set. Reference Objects 17-2, on the other hand, point at some other object. Complex objects have more than one element 17-4 and are either Composite Objects 17-3 or Collections 17-6. Composite Objects 17-3 are composed of one or more other Objects, typically of different types; composite objects can be thought of as similar to (possibly variable) record structures from conventional high level programming languages.
  • a System 17-5 is a special kind of Composite Object 17-3 which contains at least two other Objects, at least one of which is or contains a Reference Object 17-2.
  • an actor as defined in this specification, is a special type of system capable of effecting action in another system — i.e. a subject system.
  • Collections 17-6 are groups of Objects of similar types as their name implies. Collections may be Infinite 17-7, Finite 17-8 or Derived collections 17-9. Derived collections 17-9 are defined with reference to other Collections. Infinite Collections 17-7 typically describe infinite groups of values, such as a set of Integers.
  • Finite Collections 17-8 are typically defined with reference to other Objects within the model, either directly (i.e. explicitly enumerated) or through some formula through which their members can be derived. All the members 17-10 of a Finite Collection 17-8 have a structure and behaviour defined by the Object Type 17-14 associated with that Finite Collection (see below). Derived Collections 17-9, such as Subsets, are neither pre-defined as the basic value sets must be, nor fully enumerated, but have their membership defined through a formula contained within an Applied Action Type 17-24, which is discussed below.
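The Structural Instance quadrant described above can be sketched as a handful of Python dataclasses (an illustrative sketch only; the class names mirror the object types of Figure 17B, everything else is hypothetical): simple objects hold a single value or a reference, while complex objects hold several elements or several members.

```python
from dataclasses import dataclass, field

@dataclass
class ValueObject:          # 17-1: a single value drawn from a set
    value: int

@dataclass
class ReferenceObject:      # 17-2: points at some other object
    target: object

@dataclass
class CompositeObject:      # 17-3: composed of one or more other Objects
    elements: dict = field(default_factory=dict)

@dataclass
class FiniteCollection:     # 17-8: an enumerated group of similar Objects
    members: list = field(default_factory=list)

speed = ValueObject(950)
# A System 17-5 would be a CompositeObject containing at least one reference:
order = CompositeObject({"speed": speed, "ref": ReferenceObject(speed)})
fleet = FiniteCollection([order])
```

Infinite Collections 17-7 (such as the set of all integers) and Derived Collections 17-9 are deliberately omitted here, since they are defined intensionally rather than by enumeration.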
  • Structural Archetypes are illustrated in Figure 17A. They define the structure (and behaviours — through associations with Behavioural Archetypes — as is discussed below) common to all members of each Collection.
  • a Value Object Type 17-11 identifies the Collection 17-6 (typically an infinite set such as integers) from which the values for all Value Objects 17-1 of that type must be taken.
  • a Value Object type 17-11 may also include a reference to an Action Type 17-16 (as discussed below) through which the specific value of a Value Object 17-1 is derived.
  • a Reference Object Type 17-12 also includes a reference to a Collection 17-6, indicating which Collection 17-6 members of this type are allowed to reference.
  • Composite Object Types 17-13 identify the Object Types 17-14 which will provide the components of Composite Objects 17-3 of this type.
  • Collection Types 17-15 which define the archetypal member of a collection of collections, identify the Object Type 17-14 from which Collection members 17-10 of this type must be taken.
  • Behavioural Archetypes are also shown in Figure 17A. They introduce the mechanisms by which actions are effected, by associating Event Types 17-32 and Action Types 17-16 with Objects. Since Object Types 17-14 are also Objects, such associations can be with Object Types 17-14 as well as specific Objects, allowing behaviours to be associated with the entire Collections 17-6 of which an Object Type 17-14 is the Archetype.
  • Event Types 17-32 are either Elementary 17-17 or Composite 17-18.
  • Elementary Event Types 17-17 define a change in the state of an Object into a State, from another State or by the creation of an object, each State being defined with reference to a State Object 17-19.
  • a State Object 17-19 is a special form of Value Object 17-20.
  • Composite Event Types 17-18 combine other Event Types 17-32 (either Elementary or Composite) to define new Event Types.
  • the way in which these Event Types 17-32 are combined is defined via an Action Type 17-16, typically Union (OR) or Intersection (AND) Action Types.
  • Action Types 17-16 can similarly be Elementary or Composite 17-21.
  • the Elementary Action Type is not shown in the Figure, since no relationships apply to it alone (i.e. to it and to no other form of Action Type).
  • Action Types 17-16 have Determinants 17-22, which must be available for the execution of Actions of this Action Type.
  • Action Types 17-16 also have Consequents 17-23, which accept the outcomes of Actions of this Action Type.
  • Determinants 17-22 and Consequents 17-23 of Action Types 17-16 may be either Objects or Object Types 17-14.
  • Applied Action Types 17-24 are specific instances of Action Types 17-16, associated with a specific Event Type 17-32. To illustrate the difference between an Action Type 17-16 and an Applied Action Type 17-24, we can consider the action "add" (i.e. binary addition - the mathematical operator "+"). The Action Type "add" has two Determinants and a single Consequent, each of which is a Number. An Applied Action Type can be seen in Figure 14.
  • an Applied Action Type "add" is initiated at action 14-10 which adds 1 to the "current_date" (the Determinants) and assigns the result to "current_date" (the Consequent).
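The distinction between an Action Type and an Applied Action Type can be sketched as follows (a hypothetical sketch; the function and variable names are illustrative): the Action Type "add" fixes the operator and the number of Determinant and Consequent slots, while an Applied Action Type binds those slots to specific objects such as current_date.

```python
import operator

# Action Type "add": two Number Determinants, one Number Consequent.
add_type = {"op": operator.add, "determinants": 2, "consequents": 1}

def apply_action_type(action_type, determinants, consequent, objects):
    """An Applied Action Type: an Action Type bound to specific objects.
    Determinants given as strings are looked up in the object store."""
    assert len(determinants) == action_type["determinants"]
    values = [objects[d] if isinstance(d, str) else d for d in determinants]
    objects[consequent] = action_type["op"](*values)

objects = {"current_date": 20051013}
# The Applied Action Type of action 14-10: current_date = current_date + 1
apply_action_type(add_type, ["current_date", 1], "current_date", objects)
```

The same `add_type` could be bound to entirely different Determinants and Consequents elsewhere in a model, which is precisely why the specification separates the two notions.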
  • Behavioural Instances also are illustrated by Figure 17B. They are employed directly by the root meta-processor 15-4 to drive the execution of a model 11-1, as described above with reference to Figure 16.
  • the key Behavioural Instance Object Types 17-4 are Events 17-25, which may be Elementary 17-26 or Composite 17-27, and Actions 17-28, which may also be Composite 17-29. Actions 17-28 and Composite Actions 17-29 have Determinants 17-30 and Consequents 17-31. Events 17-25 are occurrences of Event Types 17-32. Actions 17-28 are occurrences of Applied Action Types 17-24.
  • this root meta-model 15-3 differs from other modelling approaches, particularly Object Oriented Modelling.
  • unlike such approaches, the meta-objects (i.e. the Object Types 17-14) are themselves Objects within the model
  • the mechanisms for capturing behaviours are not contained in, ultimately sequential, methods or operations but instead in Event Types 17-32 and Action Types 17-16. This directly enables dynamic, parallel execution of the model.
  • Figure 18 shows the behavioural components of a root meta-model such as the root meta-model 15-3.
  • the left hand side of the model (shown in Figures 18A and 18B) is a partial but more detailed version of the structural model shown in Figure 17. It explicitly includes the collections (on the left of the dotted line in the Figure) which are represented as types in the previous model, as well as the archetypal members of each set (on the right side of the dotted line in the Figure). It also includes special types of reference arrows indicating membership, sub-collection and composition relationships.
  • Figures 22A and 22B provide a key also to Figure 18.
  • the key behavioural elements of the model are shown to the right hand side of the model (in Figures 18C and 18D) in three groups associated with Object 18-1, Event 18-2 and Action 18-3 events respectively.
  • the Object events group 18-1 shows three different Event Types which may be associated with an Object: a Creation Event Type 18-4, which type of event occurs when an Object is first created by an Action (i.e. changes its State from "Does Not Exist" to "Exists"); a Modification Event Type 18-5, which type of event occurs when the Object is changed (but not created or deleted) by an Action; and a Deletion Event Type 18-6, which type of event occurs when an Action deletes the Object.
  • when an Object is created, modified or deleted, this model indicates that the root meta-processor should respond by creating a Creation Event 18-7, a Modification Event 18-8 or a Deletion Event 18-9 respectively, the Event being associated with the changed Object.
  • the Event events group 18-2 shows the behavioural rules associated with Events. Only the creation of an Event is of interest.
  • the creation of an Event initiates two parallel Actions 18-9, 18-10.
  • the first is a Composite Action 18-9, which creates a parent Event for each occurrence of a Composite Event Type of which the Event Type associated with this Event is a component. This parent Event is created only if "Execution" of the Composite Action associated with the Composite Event Type indicates that all other necessary Events have also occurred (i.e. completes in a state of "True").
  • the second Action 18-10 creates an Action occurrence for each Action Type which is initiated by the Event Type associated with this Event.
  • a Create action 18-11, forming part of the Create Action composite action 18-10, handles the creation of the created Action's components.
  • Actions 18-9, 18-10 can create multiple Events 17-25 or Actions respectively. Whether or not multiple Events 17-25 or Actions 17-28 are created depends on the number of Composite Event Types 17-18 and Action Types 17-16 associated with the Event Type 17-32 created.
  • the Action events group 18-3 shows the behavioural rules associated with Actions 17-28. It is the creation of the Action which is of primary concern, and the creation of an Action can initiate two parallel activities. One activity, which occurs every time an Action is initiated, is to create an "Initiated" Event 18-12. This allows the root meta-processor 15-4 to track execution of Actions and, more importantly, enables the Actions which are components of a Composite Action and which are dependent on its initiation to be initiated. An Execute Action activity 18-13 is initiated only when the initiated Action is an Elementary Action. Only Elementary Actions actually cause changes to Objects. This Action 18-13 causes the processor to perform the created Elementary Action using the Action Type, Determinants and Consequents referenced by the associated Action Type.
  • any changes effected by assigning the outcomes of such Elementary Actions to the Consequents initiate another cycle of execution, since the root meta-processor 15-4 detects the change and creates a Modification Event, as described above.
  • the root meta-processor 15-4 initiates a Create Completed Event action 18-14, which creates an Action Completed Event and which in turn may initiate other Actions.
  • These other Actions typically are components of the same Composite Action.
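The "Initiated"/"Completed" bookkeeping described in the Action events group can be sketched as follows (a simplified, hypothetical sketch; real Composite Actions need not run their components sequentially): creating an Action emits an "initiated" event, only Elementary Actions actually change objects, and each "completed" event is what releases dependent components of the same Composite Action.

```python
def run_composite(components, objects, log):
    """Run a Composite Action's components in dependency order, emitting
    'initiated' and 'completed' events as in 18-12 and 18-14."""
    for name, fn in components:
        log.append((name, "initiated"))   # 18-12: emitted for every Action
        fn(objects)                       # 18-13: Elementary Action execution
        log.append((name, "completed"))   # 18-14: may release dependants

log, objects = [], {"n": 0}
run_composite(
    [("inc",    lambda o: o.__setitem__("n", o["n"] + 1)),
     ("double", lambda o: o.__setitem__("n", o["n"] * 2))],
    objects, log)
```

Here the log plays the role of the Events which the root meta-processor would use to track execution; a fuller model would dispatch on those events rather than simply iterating.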
  • Figures 19A and 19B show the above-described system of operation applied to a production assembly model.
  • Figures 22A and 22B also provide a key to Figures 19A and 19B.
  • objects labelled "Customers" 19-1, "ProductSpecs" 19-2 and "PartSpecs" 19-3, relating to customers, product specifications and part specifications respectively, are created prior to execution of the model.
  • Execution of the model is initiated by the creation of an "Order" object 19-4, which orders a specific ProductSpec 19-5 for a specific Customer 19-6.
  • the creation of the Order object 19-4 is an event which initiates a "BuildProduct" action 19-7.
  • there are plural objects called Customer, ProductSpec and PartSpec although only one of each of these is illustrated in the Figure.
  • Initiation of the BuildProduct action 19-7 initiates an "ObtainParts" action 19-8.
  • the Parts 19-9 to be obtained by the ObtainParts action 19-8 are those defined in a PartSpecList object 19-10 associated with the ProductSpec object 19-5.
  • Initiation of the ObtainParts action 19-8 itself initiates one occurrence of an ObtainPart action 19-11 for each part in the PartSpecList 19-10 associated with the ProductSpec object 19-5.
  • Each ObtainPart action 19-11 commences by initiating a check (19-12) as to whether the part requested is an assembled part. If it is, action 19-12 completes in a True state 19-21 which initiates a further ObtainParts action 19-13 for the assembly of the requested part, just as the BuildProduct action 19-7 initiates an ObtainParts Action 19-8 for the finished product. Once all of the parts are available, the completion of the ObtainParts action 19-13 causes 19-22 the AssemblePart action 19-14 to assemble the part itself, which constitutes either the creation of an object or a change in the state of an object.
  • action 19-12 completes in a False state 19-23, initiating action 19-15 to determine whether it is a fabricated part. If it is, event 19-24 initiates a FabricatePart action 19-16, which fabricates the part. Otherwise, a BuyPart action 19-17, which buys the part, is initiated by event 19-25.
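The recursive part-obtaining logic of Figures 19A and 19B can be sketched as follows (the part specifications are hypothetical examples, not from the specification): an assembled part recursively obtains its own sub-parts before being assembled, a fabricated part is fabricated, and any other part is bought.

```python
# Hypothetical part specifications: each part is assembled (with a
# sub-part list), fabricated, or bought.
part_specs = {
    "car":    {"kind": "assembled",  "parts": ["engine", "wheel"]},
    "engine": {"kind": "assembled",  "parts": ["piston"]},
    "piston": {"kind": "fabricated", "parts": []},
    "wheel":  {"kind": "bought",     "parts": []},
}

def obtain_part(name, log):
    """ObtainPart 19-11: recurse for assemblies, else fabricate or buy."""
    spec = part_specs[name]
    if spec["kind"] == "assembled":        # check 19-12 completes "True"
        for sub in spec["parts"]:          # further ObtainParts 19-13
            obtain_part(sub, log)
        log.append(("assemble", name))     # AssemblePart 19-14
    elif spec["kind"] == "fabricated":     # check 19-15 completes "True"
        log.append(("fabricate", name))    # FabricatePart 19-16
    else:
        log.append(("buy", name))          # BuyPart 19-17

log = []
obtain_part("car", log)
```

In the event-driven model each recursive `obtain_part` call would be a separately initiated action, so independent sub-parts (here, the engine and the wheel) could be obtained in parallel.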
  • A complex activity system is illustrated in Figure 20.
  • a subject system 10-2 is contained within an environment 10-3, in the same way as that illustrated in Figure 10.
  • an actor 10-1 includes first and second sub-actors 20-1, 20-2.
  • the sub-actors 20-1, 20-2 are interconnected, and each effects action 20-3, 20-4 in the subject system 10-2.
  • the sub-actors 20-1, 20-2 can be considered as co-operating with each other to effect action 20-3, 20-4 in the subject system 10-2.
  • a partial composite root meta-processor 15-4, such as the one shown in Figure 15, is illustrated in Figure 21.
  • the root meta-processor 15-4 is formed of a system of cooperating sub-actors 21-1, 21-2, 21-3.
  • An activator actor 21-1 is concerned with events
  • an executor actor 21-2 is concerned with actions
  • a recorder actor 21-3 is concerned with objects.
  • Each of the actors 21-1, 21-2, 21-3 is connected to each of the other actors, in order to enable the cycle shown in and described with reference to Figure 16 to be performed. (Note that a complete composite root meta-processor requires further components as discussed below with reference to Figure 23. A complete composite root meta-processor is therefore described below with reference to Figure 24.)
  • the allocation of tasks between the sub-actors 21-1, 21-2, 21-3 is as follows.
  • the activator 21-1 is arranged to respond to events and to determine from the events and from the model or models 11-1, 15-1, 15-3 which actions to initiate.
  • the executor actor 21-2 is arranged for effecting the actions which are determined by the activator 21-1 as being required to be initiated.
  • the executor 21-2 effects the actions so determined on the objects in accordance with the model or models 11-1, 15-1, 15-3.
  • the recorder 21-3 manages the objects forming part of the models and recognises events triggered by changes in the states of the objects it manages, or by the creation or deletion of an object; such events in turn initiate further activity by the activator 21-1.
  • Figure 18 shows the key responsibilities of the recorder 21-3, activator 21-1 and executor 21-2 actors shown in Figure 21 for the behavioural rules described above with reference to Figure 18.
  • the recorder 21-3 has responsibility for the Object events group 18-1, i.e. for detecting and creating Object Creation 18-7, Modification 18-8 and Deletion 18-9 Events.
  • the activator 21-1 has responsibility for the Event events group 18-2, i.e. for detecting Event Creation Events 18-7 and creating the associated Parent (Composite) Events and Actions 18-9, 18-10.
  • the executor 21-2 has the responsibility for the Action events group 18-3, i.e. for detecting the Action Creation Events 18-10 and executing such actions, together with creating the associated Action Initiated and Action Completed Events 18-12, 18-14.
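The division of labour between the three sub-actors can be sketched as three cooperating components (a simplified, hypothetical sketch; all names are illustrative): the recorder holds the objects and reports the events triggered by changes, the activator maps events to the actions they initiate, and the executor effects those actions on the recorder's objects.

```python
class Recorder:                       # 21-3: manages objects, detects changes
    def __init__(self, state_events):
        self.objects, self.state_events = {}, state_events
    def change(self, obj, value):
        self.objects[obj] = value
        # return the events triggered by this change of state
        return self.state_events.get((obj, value), [])

class Activator:                      # 21-1: events -> actions to initiate
    def __init__(self, event_actions):
        self.event_actions = event_actions
    def actions_for(self, event):
        return self.event_actions.get(event, [])

class Executor:                       # 21-2: effects actions on objects
    def run(self, action, recorder):
        obj, value = action
        return recorder.change(obj, value)

recorder  = Recorder({("light", "on"): ["lit"]})
activator = Activator({"switch_flipped": [("light", "on")]})
executor  = Executor()

pending = ["switch_flipped"]          # the channel 23-1, in effect
while pending:
    event = pending.pop()
    for action in activator.actions_for(event):
        pending.extend(executor.run(action, recorder))
```

The `pending` list stands in, very loosely, for the channel connecting the three actors; in the full architecture each component could itself be such a system.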
  • the communication between sub-actors can take either a continuous or a discrete form.
  • the connection of the first and second sub-actors 20-1, 20-2 is illustrated in Figure 23.
  • the first sub-actor 20-1 includes a connection to a channel 23-1, which also is connected to the second sub-actor 20-2.
  • the sub-actors 20-1, 20-2 are thus connected to each other by the channel 23-1.
  • a first interface component 23-2 is included in the first sub-actor 20-1, the first interface component 23-2 being connected to a second interface component 23-3 forming part of the channel 23-1.
  • a third interface component 23-4 forming part of the channel 23-1 is connected to a fourth interface component 23-5, which forms part of the second sub-actor 20-2.
  • Each of the interface components 23-2, 23-3, 23-4 and 23-5 constitutes an actor which preferably is constituted as the actor 10-1 of Figure 15.
  • Each of the actors 23-2, 23-3, 23-4 and 23-5 has as its subject system 10-2 the article which constitutes the communication between the components.
  • Continuous communication articles may take the form of material flow, for example piped fluids or powders, or of signals, for example electro-magnetic waveforms or electronic waveforms.
  • Discrete communication articles may constitute packages, for example the shipment of manufacturing components or the like, or messages, for example information packages, such as orders.
  • Each of the actors therefore includes a model of the article which is used for communication, and effects action on that article.
  • A complete generic model for a composite root meta-processor is illustrated in Figure 24.
  • the activator 21-1, the executor 21-2 and the recorder 21-3 are shown as being connected to one another by a channel 23-1.
  • the channel 23-1 is also connected to an interface 24-2, which allows the root meta-processor 15-4 to be connected to external systems, for example by the channel 24-3.
  • the channel 23-1 may connect the activator 21-1, the executor 21-2, the recorder 21-3, and the interface 24-2 to each other directly, as shown.
  • the channel system may instead take any form between these two extremes.
  • Each of the sub-actors 21-1, 21-2, 21-3, 23-1 and 24-2 may itself be constituted by a system of co-operating activators, executors, recorders, channels and interfaces. This is illustrated in Figure 25.
  • the activator 21-1 is shown including an executor 25-1, a recorder 25-2 and an activator 25-3, all of which are connected together by a channel 25-4.
  • An activator interface 25-5 connects the activator 21-1 to the channel 23-1.
  • the channel 23-1 itself comprises a root meta-processor constituted by a recorder 25-6, an executor 25-7, an activator 25-8 and a channel 25-9.
  • the channel 25-9 forming part of the channel 23-1 is connected to each of the executor 21-2, the recorder 21-3, the activator 21-1 and the interface 24-2.
  • the executor 21-2, the recorder 21-3, the interface 24-2 and the external channel 24-3 also are constituted by root meta-processors including the relevant components.
  • a ring-like channel system, or a hybrid channel system may be used as described above with reference to Figure 24.
  • although each of the components of the root meta-processor 15-4 is illustrated as a root meta-processor, it is not necessary that each of the components is so constituted. In some circumstances, it may be required to implement only one, two or three of the components as root meta-processors.
  • the subject system of an activator 21-1 is a system of events which initiate actions in a subject system.
  • the subject system of an executor 21-2 is a system of actions which change objects within a subject system.
  • the subject system of a channel 23-1, 24-3 is a system of communication between two or more actors.
  • the subject system of an interface 24-2 is a system of communication between an actor and a channel or the outside world.
  • a model of layered root meta-actors is shown in Figure 26.
  • a physical actor 26-1 is illustrated comprising a root meta-processor 15-4 and a model 11-1.
  • the model 11-1 includes a meta-model 15-1 and a root meta-model 15-3.
  • the root meta-processor 15-4 is constructed like the root meta-processor of Figure 24, namely including an activator 21-1, an executor 21-2, a recorder 21-3, a channel 23-1 and an interface 24-2.
  • the model 11-1 contains sub-models 26-20, 26-21, 26-22 which respectively elaborate the behaviours of the virtual root meta-processors 26-3, 26-4 and 26-5, in particular the virtual activators, executors, recorders, channels and interfaces thereof, into terms which are directly executable by the physical root meta-processor 15-4.
  • the model - and hence the meta-model and root meta-model - of any root meta-actor is in fact stored within the recorder of the root meta-actor's root meta-processor.
  • the second virtual actor 26-7 contains a sub-model 26-23 within the model 26-10 which in turn elaborates the root meta-model of the fourth virtual actor 26-12 in terms which render it executable by the virtual root meta-processor 26-4 of the second virtual actor 26-7.
  • the physical actor 26-1 is the only hardware element; all the other elements are virtual.
  • the root meta-processors 26-3, 26-4 and 26-5 can be thought of as virtual model execution engines.
  • a single actor can support multiple virtual actors, each of the virtual actors having a respective role or roles, provided that the elaboration model of the single actor recognises each of the roles of the virtual actors.
  • where a single actor supports multiple roles, its elaboration model requires additional rules for handling contention between the roles. Such rules might be definitions of the relative priority of each role.
  • Applying an actor having multiple roles to a situation in which a production control computer system is used for manufacturing, the actor can have a role of monitoring production and a role of raising alarms for production management should any production issues occur, for example falling behind schedule.
  • the actor can also have the role of running a simulation of a proposed change to the production process, in order to enable the performance of the existing process to be compared to that of the proposed process. Obviously, keeping production running is more important and more urgent than completing the simulation quickly.
  • the computer system (not shown) is a single composite actor having two assigned roles, namely production monitor and simulator.
  • the hardware of the computer system is provided with an elaboration model, which needs to be aware of both roles and needs to include rules for handling any potential conflict between the roles. If both the simulator and the production monitor want to do something simultaneously, the production monitor role of the actor has a higher priority and therefore takes precedence over the simulation role.
  • the multi-actor elaboration model therefore needs two individual models, one for each role, contained within a single, common model.
  • the single model also has rules to handle the inter-role interactions.
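The contention rules described above can be sketched as a simple priority scheme (all names and priority values are hypothetical): when the two roles request the shared actor at the same time, the elaboration model's rules order the requests so that the production monitor takes precedence over the simulator.

```python
# Hypothetical relative priorities for the two roles: lower = more urgent.
ROLE_PRIORITY = {"production_monitor": 0, "simulator": 1}

def schedule(requests):
    """Order simultaneous (role, task) requests by role priority.
    sorted() is stable, so requests of equal priority keep arrival order."""
    return sorted(requests, key=lambda rt: ROLE_PRIORITY[rt[0]])

order = schedule([("simulator", "advance simulation"),
                  ("production_monitor", "raise alarm")])
```

A real elaboration model might instead express such rules as events and actions within the common model, but the effect is the same: the inter-role rules live in one place, alongside the two individual role models.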
  • This approach differs from similar existing layered approaches (for example, in the design of modern layered operating systems) in that it is not the model of the virtual actor which is elaborated onto the physical actor, but instead the virtual root meta-model.
  • This offers the ability to execute any model of the virtual actor capable of being executed by the virtual root meta-processor on the physical actor without requiring modification of the physical actor itself, significantly increasing the reusability of the elaboration model.
  • the subject system of an elaborator 11-1 is a system of elaboration of the behaviour of a processor device 26-3, 26-4, 26-5, 26-13 comprising a processor, meta-processor or root meta-processor of an actor 26-6, 26-7, 26-8 enabling direct execution by another processor, meta-processor or root meta-processor 15-4.
  • an elaboration model 27-0 will now be described with reference to Figures 27A and 27B.
  • part of an application model 27-1 includes an event 27-2, which initiates a multiplication process to multiply two numbers m and n to produce an output number r.
  • the numbers m and n are placed in the eight-bit address2 object 27-9 and eight-bit address4 object 27-8 respectively before the multiplication process begins.
  • This elaboration model has an event e' 27-4, which is initiated whenever the event e 27-2 is initiated in the application model 27-1.
  • the event e' 27-4 initiates a composite action 27-3. This in turn initiates action 27-5 to assign the value of zero to an eight-bit result object 27-7.
  • Completion of the ADD action 27-5 initiates eight parallel sequences of actions, one sequence for each bit of the address4 object 27-8.
  • Each of the parallel sequences indicated at 27-10 tests whether or not the corresponding bit of the address4 object 27-8 is set and, if it is, rotates the value of the further address2 object 27-9 (which holds the number m) by a number of bits specified at a corresponding ROR rotation input using ROR action 27-12, and adds the result to the value of the result object 27-7.
  • ADD actions 27-13 are initiated simply by the completion of the preceding ROR actions 27-12.
  • the ROR actions 27-12 are initiated by the associated BIT action 27-10 having completed in a state of "True".
  • the result of the elaboration process is a value stored in the result object 27-7 which is equal to the binary number m in the address2 object 27-9 multiplied by the binary number n in the address4 object 27-8.
  • this model is executable in a small fraction of the time of the equivalent multiplication algorithm utilising a conventional Von Neumann type computer.
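The elaboration described above is, in effect, a shift-and-add multiplication in which the eight bit-test sequences run in parallel. A minimal sequential Python sketch of the same logic (function and variable names are illustrative, not part of the specification) is:

```python
def shift_add_multiply(m: int, n: int) -> int:
    # Sequential sketch of the parallel elaboration model of Figures 27A/27B:
    # each loop iteration corresponds to one of the eight parallel
    # BIT/ROR/ADD sequences, which in the model all run simultaneously.
    result = 0                      # result object first set to zero
    for bit in range(8):            # one sequence per bit of n (address4 object)
        if (n >> bit) & 1:          # BIT action: test whether this bit of n is set
            result += m << bit      # ROR then ADD: shifted copy of m accumulated
    return result
```

In the root meta-actor implementation the eight sequences need not wait for one another, which is the source of the claimed speed advantage over a program-counter-driven equivalent.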
  • the elaborator of Figure 27 is merely an example, and the exact form of the elaborator will depend particularly on the function that it needs to perform.
  • the composite root meta-actor model described above can be used to create single-processor, multiprocessor, (potentially massively) micro-parallel and distributed-processor computer architectures. This approach breaks the dependency on the program counter which is inherent in traditional Von Neumann type computers, and enables fine-grained parallel implementations which closely reflect the semantics of the systems which the implementations are intended to represent.
  • a root meta-actor based computing device utilising a single processor is illustrated in Figure 28.
  • the computing device comprises an activator 21-1, an executor 21-2, a recorder 21-3 and interfaces 24-2 all connected to each other by a channel 23-1.
  • the computing device thus is constructed according to the scheme used for the root meta-processor of Figure 24.
  • the activator 21-1 includes an event queue register (EQR) 28-1, which contains the reference of the next entry in the event queue. A reference may be considered as being similar to an address as used in conventional computing devices. This allows plural events to be awaiting processing simultaneously.
  • the activator 21-1 also includes an event register (ER) 28-2 which contains a reference of the current event, which is the event which is being currently processed.
  • the activator 21-1 also includes an event type register (ETR) 28-3 which contains the physical reference of the current event type.
  • the executor 21-2 similarly includes an action queue register (AQR) 28-4, an action register (AR) 28-5 and an action type register (ATR) 28-6.
  • the action register 28-5 contains the address of the current action which is being processed.
  • the action type register 28-6 contains the operation code of the current action type, which is to be considered as being similar to a conventional instruction register.
  • the executor 21-2 also includes first to nth general purpose registers (GPR) 28-7 to 28-8. These registers 28-7, 28-8 contain parameters which are used in processing.
  • the executor 21-2 optionally further includes an instruction decoder (ID) 28-9 and an arithmetic and logical unit (ALU) 28-10, which are conventional in microprocessor architecture.
  • the recorder 21-3 includes an object queue register (OQR) 21-11, which contains the reference of the next entry in the object queue. Also included are an object register (OR) 21-12, which contains a reference of the current object being processed, and an object type register (OTR) 21-13 which contains the current object's type. The object's type is used primarily to distinguish between objects, actions and events.
  • the recorder 21-3 also includes a memory data register (MDR) 21-14 which is used to store data being recorded to or fetched from memory, and a memory access register (MAR) 21-15, which contains the reference at which store or fetch instructions to be actioned are held.
  • An object store or memory (OS) 21-16 forms part of the recorder 21-3.
  • access to all objects in the OS 21-16 is managed by the Recorder 21-3. Whenever access to an object is required, the object's reference is placed on the object queue. The Recorder's OQR 21-11 points at the next item in the object queue to be processed. When the object queue is not empty and the Recorder 21-3 is ready, it obtains the reference from the object queue and places it in the Object Register (OR) 21-12.
  • the Recorder 21-3 then places the OR into the MAR 21-15 and issues a read or write instruction to the OS 21-16; in the case of a read, the Recorder 21-3 obtains the object from the OS location specified in the MAR 21-15 and places it in the MDR 21-14; in the case of a write, the Recorder 21-3 places the object in the MDR 21-14 into the OS 21-16 at the location specified by the MAR 21-15.
  • Retrieved from the OS 21-16 along with the object is information about the object's type, which is placed in the OTR 21-13. The object's type is then available for further processing as required by the Recorder 21-3, the Activator 21-1 or the Executor 21-2.
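The Recorder's read/write cycle described above can be sketched in Python as follows. The register names (OQR, OR, MAR, MDR, OTR) follow the description; the queue-entry layout and method names are illustrative assumptions:

```python
from collections import deque

class Recorder:
    """Sketch of the Recorder cycle: object queue -> OR -> MAR -> OS/MDR."""
    def __init__(self):
        self.object_queue = deque()   # the OQR conceptually points at its head
        self.object_store = {}        # OS 21-16: reference -> (object, object_type)

    def request(self, op, ref, obj=None, obj_type=None):
        # Whenever access to an object is required, its reference is queued.
        self.object_queue.append((op, ref, obj, obj_type))

    def cycle(self):
        # When the queue is not empty and the Recorder is ready, it obtains
        # the next reference and services the read or write.
        if not self.object_queue:
            return None
        op, self.OR, obj, obj_type = self.object_queue.popleft()
        self.MAR = self.OR                                  # OR placed in the MAR
        if op == "write":
            self.MDR = obj                                  # object into the MDR...
            self.object_store[self.MAR] = (obj, obj_type)   # ...then into the OS
            return None
        self.MDR, self.OTR = self.object_store[self.MAR]    # read: object -> MDR, type -> OTR
        return self.MDR
```

The object's type surfaced in the OTR is what lets the Activator and Executor distinguish events, actions and plain objects fetched through the same cycle.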
  • the Activator's 21-1 event queue is held in the OS 21-16 within the Recorder 21-3.
  • Whenever an event is triggered, it is placed in the event queue by the triggering actor, e.g. the Recorder 21-3 or an Interface 24-3.
  • Events continue to be added to this queue as they are generated.
  • the EQR 28-1 points to the next item in the event queue to be processed. Whenever the event queue is not empty and the Activator 21-1 is ready, it follows the reference in the EQR 28-1 to obtain the next item in the event queue. This includes a reference to the event object held within the Recorder 21-3, which is placed in the ER 28-2.
  • the reference in the ER 28-2 is then used, in turn, to obtain the event's type from the information held about the event object in the Recorder and return this to the ETR 28-3.
  • the event's type is then used to obtain associated action types and parent event types, as described in Figure 18.
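The Activator pass just described (EQR to ER to ETR, then initiation of the associated action types) can be sketched as follows. The dictionary-based "recorder" lookup and the field names are illustrative assumptions standing in for references resolved via the Recorder:

```python
from collections import deque

def activation_cycle(event_queue, recorder, action_queue):
    # One Activator pass: follow the EQR to the next event reference (ER),
    # fetch the event's type (ETR) via the Recorder, then place the type's
    # associated action types on the Executor's action queue.
    if not event_queue:                     # nothing to activate yet
        return
    ER = event_queue.popleft()              # next event reference
    ETR = recorder[ER]["type"]              # event's type via the Recorder
    for action in recorder[ETR]["associated_actions"]:
        action_queue.append(action)         # initiated actions queue for the Executor
```

Parent event types (per Figure 18) would be handled the same way: a further lookup on the ETR feeding new entries back onto the event queue.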
  • the Executor's action queue for a single processor root meta-actor based computing device is held in the OS 21-16 within the Recorder 21-3.
  • When an action is initiated, it is placed in the Executor's action queue by the Activator 21-1. Actions continue to be added to this queue as they are identified.
  • the AQR 28-4 points to the next item in the action queue to be processed. Whenever the action queue is not empty and the Executor 21-2 is ready, it follows the reference in the AQR 28-4 to obtain the next item in the action queue. This will include a reference to the action object held within the Recorder, which is placed in the AR 28-5.
  • the reference in the AR 28-5 is then used, in turn, to obtain the action's type from the information held about the action object in the Recorder 21-3 and return this to the ATR 28-6.
  • the contents of the AR 28-5 and ATR 28-6 are then used to execute the action, as described in Figure 18. More specifically, if the action is an elementary action (i.e. is not composed of further, more detailed actions but can be directly executed by the Executor 21-2 in one step), the action type in the ATR 28-6 has specific circuitry or micro-coding within the ALU 28-10 to implement that action type; in this way an action type in this invention is equivalent to an instruction in a conventional computer, and the ATR 28-6 is similarly equivalent to an Instruction Register. Specific information required for the execution of the action, such as the locations of the determinants and consequents in the OS 21-16, is provided via the action object referenced in the AR 28-5, as described in Figures 17 and 18.
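The ATR-driven selection of "specific circuitry or micro-coding" can be sketched as a dispatch table, where each elementary action type's operation code selects a function (the op codes and 8-bit operations here are illustrative, chosen to match the ADD/ROR/BIT actions of Figure 27):

```python
# Hypothetical ALU dispatch: the op code held in the ATR selects dedicated
# "circuitry" (here a function), just as an instruction register selects an
# instruction in a conventional machine.
ALU_OPS = {
    "ADD": lambda a, b: (a + b) & 0xFF,                        # 8-bit add
    "ROR": lambda a, n: ((a >> n) | (a << (8 - n))) & 0xFF,    # 8-bit rotate right
    "BIT": lambda a, n: bool((a >> n) & 1),                    # test bit n
}

def execute_elementary(ATR, operands):
    """Execute one elementary action; the AR would supply the operand
    locations (determinants/consequents) resolved via the Recorder."""
    return ALU_OPS[ATR](*operands)
```

Composite actions, by contrast, would be decomposed back through the Activator rather than dispatched directly.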
  • Activator, Executor and Recorder processing cycles are themselves applications of the root meta-execution cycle described in Figure 16. These must be implemented within the processor configuration by: the further manipulation of the event, action and object queues (using multiple queues, or priorities within the queues) to ensure that events, actions and objects being used by the Activation, Execution and Recording cycles themselves are handled in preference to "user" events, actions and objects from the model being executed; and the implementation of additional logic circuits designed specifically to execute the Activation, Execution and Recording cycles.
  • the computing device of Figure 28 is in some ways similar to the conventional Von Neumann type computer. However, operation is event-driven, rather than flow-and-jump. Put another way, the computing device of Figure 28 does not require a program counter to process events, actions and objects. Also, the activator 21-1 determines what actions happen and at what times, and is guided by the behavioural model described above.
  • a root meta-actor based computing device can also be constructed using multiple processors. This is shown in Figure 29.
  • the structure here is the same as that for the computing device of Figure 28, but each of the activator 21-1, executor 21-2 and recorder 21-3 are replaced by (that is, are constituted by) another copy of the computing device illustrated in Figure 28.
  • Each of the activator 21-1, executor 21-2 and recorder 21-3 are connected to the channel 23-1 by a respective interface 29-1, 29-2 and 29-3.
  • Some modifications to this described architecture can be made, namely that a recorder 29-4 included within the activator 21-1 is restricted to event queues 29-5, and a recorder 29-6 contained within the executor 21-2 is restricted to action queues 29-7.
  • an executor 29-8 contained within the activator 21-1 and an executor 29-9 within the recorder 21-3 may optionally be restricted to specialist actions required by those actors.
  • each of the Activator 21-1, Executor 21-2 and Recorder 21-3 explicitly has a dedicated processor with its own Activator, Executor and Recorder, together with internal Channels and Interfaces to the higher level Channel 23-1 connecting all components of the multi-processor device. This allows the design of the Activation, Execution and Recording cycles to be specialised.
  • the Recording cycle within the Activator's Recorder 29-4 can be specialised to deal with the management of events in an event queue.
  • the Execution cycle within the Activator's Executor 29-8 can be specialised to those actions required for identifying and initiating actions and triggering parent events, as described in the model in Figure 18.
  • Figure 30 shows how plural elements can operate together to form a more sophisticated actor.
  • a root meta-actor based computing device 30-0 using micro-parallel processing includes plural activators 21-1, 30-1 and 30-2 connected bi-directionally to the channel 23-1.
  • Each activator 21-1, 30-1 and 30-2 includes a recorder (the activator 21-1 is shown with a recorder 30-3).
  • the recorder 30-3 includes an event queue 30-4.
  • the other activators 30-1 and 30-2 are constructed in the same way.
  • each activator 21-1, 30-1 and 30-2 can include more than one recorder.
  • Plural executors 21-2, 30-4 and 30-5 are connected bi-directionally to the channel 23-1.
  • Each includes a recorder 30-6, and each recorder includes an action queue 30-7.
  • Two recorders 21-3, 30-8 are connected bi-directionally to the channel 23-1. Each includes a recorder 30-9, and each recorder includes an object queue 30-10. Each queue 30-4, 30-7 and 30-10 provides events, actions and objects respectively on demand, on a first-in, first-out basis.
  • the channel 23-1 is bi-directionally connected to the interface 24-2. The connection of the various components to the channel 23-1 avoids requiring them to be located in the same place, and so allows them to be physically distributed.
  • the recorder 30-3 contained within the activator 21-1 includes an event queue 30-15.
  • the recorders (not shown) in the other activators 30-1, 30-2 do not contain event queues.
  • the recorder 30-6 contained within the executor 21-2 includes an action queue 30-7. None of the recorders (not shown) in the other executors 30-4, 30-5 include an action queue.
  • the recorders in the activators 21-1, 30-1, 30-2 do not contain object or action queues.
  • the recorders in the executors 21-2, 30-4, 30-5 do not contain object or event queues.
  • the executor or executors 30-11, 30-12, 30-13 within the activators 21-1, 30-1, 30-2 may be restricted to specialist actions required by those actors.
  • parallel processing is enabled by multiple activators, executors and recorders each sharing a common event, action and object queue respectively, enabling activity to be sourced from and processed by multiple processors simultaneously.
  • Multiple action queues 30-7, event queues 30-15, and object queues 30-10 are needed where plural executors 21-2, activators 21-1 and recorders 21-3, respectively are not collaborating.
  • each executor, activator and recorder can have an associated queue, and queues are disabled as required so that only one queue in a set of collaborating executors, activators or recorders is used at a given time.
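The sharing of a single queue among a set of collaborating processors, with the other queues disabled, can be sketched minimally (class and method names are illustrative; the locking is one way to let several executors pull safely from one queue):

```python
from collections import deque
from threading import Lock

class SharedActionQueue:
    # A single action queue shared by collaborating executors: whichever
    # executor is ready next pulls the next action, first-in first-out.
    def __init__(self, enabled=True):
        self._q, self._lock = deque(), Lock()
        self.enabled = enabled          # a disabled queue is simply not used

    def put(self, action):
        with self._lock:
            self._q.append(action)

    def get(self):
        # Returns the next action, or None when the queue is empty
        # (the executor then waits rather than busy-running a program counter).
        with self._lock:
            return self._q.popleft() if self._q else None
```

When executors stop collaborating, each re-enables its own queue, giving the multiple independent queues described above.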
  • Devices based on the architecture described in Figure 30 may have all their components combined into a single physical housing. Alternatively, collections of components may be physically remote from one another, as in a distributed computer system.
  • the root meta-actor based approach described with reference to Figure 30 allows parallel architectures to be employed, and for them to be exploited for improved performance without any need to adapt the system model from the original problem domain. This is achieved even though the program counter of the conventional Von Neumann type architecture is no longer required, and thus frees the hardware from the sequential/serial paradigm which dominates prior art computer architectures.
  • the root meta-actor in this approach also has advantages over other prior art computer architectures which do not use the Von Neumann approach.
  • concurrency in the root meta-actor based model is derived from the fundamentally event-driven nature of the system model which it implements. This is in contrast to the Concurrent Sequential Process model, which remains grounded in the sequential paradigm, i.e., each processor operates as an essentially sequential machine which limits the granularity of the parallelism which it can support. Also, dependencies between actions in the root meta-actor based model are through events which are explicitly defined and dynamically created. This offers greater flexibility and broader application than the prior art Dataflow model. Although the Dataflow model also eliminates the program counter, enabling more granular parallelism, the events which drive actions are implicit and predetermined by a compiler, which queues functions to be initiated on receipt of matching tokens.
  • the root meta-actor based model also provides improvement over the Actor model, which initiates actions only in response to the receipt of a message.
  • message receipt is the only type of event which can initiate action in an Actor model, its applicability to real-world, event-driven systems is limited.
  • Root meta-actors which have their meta-models and root meta-models embedded in the event-driven model driving their primary process are capable of supporting well defined and highly adaptable systems, since any of the models can be adapted without recourse to offline, specialist programming, statically translated across semantic gaps. This is in contrast to Object Oriented approaches, for example, in which, at best, meta-models are incomplete, only available to specialist developers and unavailable for adaptation during system operation.
  • Figure 31 A shows how the massively parallel architecture described in Figure 30 can be configured to provide a tightly- or loosely-coupled integrated personal computing system 31-0.
  • one or more Activators 31-1, 31-2, 31-3, Executors 31-4, 31-5, 31-6 and Recorders 31-7, 31-8, 31-9 provide the primary processing capability of the system, capable of supporting several concurrent, and perhaps massively parallel, event-driven models simultaneously.
  • the major interfaces of the system, i.e. a Human-Technology Interface 31-10, a Long Term Storage System 31-11, a Print System 31-12 and a Communications System, are here provided by further root meta-processor based machines.
  • Each of these root meta-processor based machines contains at least one Activator, Executor and Recorder connected via a Channel as shown, together with further interfaces both for connecting to the primary internal channel of the core processor and for connecting to dedicated peripheral interfaces, such as a keyboard controller 31-14 of the Human-Technology Interface 31-10 or a Disk Controller 31-15 of the Long Term Storage System 31-11. Distributing this interface activity to dedicated interface processors 31-14, 31-15 frees the core processor from being involved in such activities (unlike in a conventional Von Neumann processor), significantly increasing the throughput of the primary processor.
  • the activators 31-1, 31-2, 31-3, the executors 31-4, 31-5, 31-6 and the recorders 31-7, 31-8, 31-9 acting together constitute the primary processor.
  • This configuration 31-0 could be further distributed, for example, by combining associated devices such as the speaker and microphone into a headset 31-16 with its own root meta-processor. This would allow the headset 31-16 to undertake some aspects of the aural/oral interface, such as voice recognition or voice synthesis, without even involving the wider Human-Technology Interface 31-10.
  • an integrated personal computing system 31-0 could offer the features of a conventional personal computer (PC), a mobile telephone, a personal digital assistant (PDA) and a digital watch without the device and data redundancy which undermines the present-day collection of poorly integrated devices.
  • Figure 31B shows how an operating system 31-20 might be designed for such a root meta-processor based integrated personal computing system 31-0.
  • the bottom layer of the diagram shows selected elements of the integrated personal computer hardware configuration shown in Figure 31 A, connected by Channels. Above them are the layers of the operating system which are common to all, or distinct to each, processor type.
  • the User Application 31-21, System Call Handler 31-22 and Hide the low- level hardware 31-23 layers are similar or common across all processor types, and are similar to the equivalent layers in a conventional layered operating system (see Figure 5). The key differences to conventional layered operating systems are in the intermediate layers.
  • the Virtual Activation, Virtual Execution and Virtual Object Management layers 31-24, 31-25, 31-26 each provide an implementation of the behavioural models for the Activator 31-1, Executor 31-4 and Recorder 31-7 respectively, based on the generic models described in Figure 18.
  • a Virtual Model Execution Management layer 31-27 which provides for multiple models to be in-process simultaneously, without adversely impacting one another.
  • All other processor types also need a common platform of Virtual Activation, Execution and Object Management, as well as some elements of Virtual Model Execution Management, since each has a root meta-processor configuration at its core.
  • This arrangement allows the operating system 31-20 to be enriched with components often relegated to the user application layer in conventional layered operating systems.
  • For example, the headset 31-16 operating system might incorporate a voice recognition element (not shown) which can be tuned to the voice of a single individual (the headset wearer) and interfaced to potentially multiple user applications within the integrated personal computing system 31-0.
  • the operating system described in Figure 31 B contrasts with a conventional modern layered operating system design, as shown in Figure 5, in which all the layers of the operating system are intermingled on a single processor (CPU). It also contrasts with previous designs for multiple processors, such as the known multi-processor, multi-computer and distributed computer configurations.
  • activity scheduling (conventionally "process" or "thread" scheduling) and activity synchronisation, the two key challenges for conventional multiple-processor operating systems, are both considerably simplified. This enables a wide range of tightly- to loosely-coupled configurations of multiple processors which, with conventional Von Neumann type processors, would necessitate a considerable increase in complexity.
  • Activity scheduling is simpler since a sequential process context does not have to be switched in and out of the CPU as the Scheduler selects a new process to run.
  • Activity synchronisation is simpler because, in contrast to a conventional sequential process driven by a clock cycle and an automatically incrementing program counter, the natural state of an event-driven model is to wait until an event, such as an input from another activity, is triggered.
  • the subject system of an operating system is a system of computer resources and/or virtual machines.
  • Figure 32A shows one approach to elaborating a virtual root meta-actor on conventional (i.e. Von Neumann-type) computer hardware, employing static translation, such as compilation or interpretation.
  • a Meta-Translator 32-1 (i.e. a compiler or interpreter) is created using an Application Meta-Model 32-2 together with a meta-model 32-3 for the underlying hardware.
  • the Meta-Translator 32-1 contains rules similar to the Elaborator component of a Virtual Layered Meta-Actor, together with additional rules and constraints required in view of the sequential nature of the underlying hardware.
  • Application Models 32-4 can then be statically translated into Object Code 32-5 which can be directly executed on an underlying computer 32-6, just like any other compiled program.
  • a translation 32-7 from the Meta-Translator 32-1 to the Object code 32-5 is static, and from there it is a dynamic elaboration 32-8 onto the hardware 32-6.
  • Inputs to the Meta-Translator 32-1 are model inputs from the Application Model 32-4.
  • the term "conventional computer" will be understood to include computers having a processor and memory coupled to a bus, and to include those having processors connected in parallel, such as supercomputers.
  • Figure 32B shows an alternative approach to elaborating a virtual root meta-actor on conventional (i.e. Von Neumann-type) computer hardware, employing a virtual elaboration machine.
  • the Meta-Translator 32-1 is again required, but on this occasion is used to create Object Code for a Virtual Machine (VM) which runs on top of the underlying computer 32-6.
  • the VM creates a pseudo-root meta-actor machine on which Application Models can be elaborated, via a VM- specific Application Elaborator 32-10.
  • the Application Elaborator 32-10 and VM 32-9 together handle the semantic gap between the dynamic, parallel model, meta-model and root meta-model and the static, sequential nature of the underlying machine 32- 6.
  • Root meta-actor based software differs from conventional software in that only the elementary actions need to be translated onto the underlying machine.
  • a root meta-actor based simulator is a special type of elaborator which enables the model to be executed against a proxy subject system to enable both the details and dynamics of the model to be explored without impacting the intended subject system.
  • a simulator includes rules for handling: the relationships between simulation, physical and wall-clock time; the distribution of internal and external events, for example the frequency of customer orders or the proportion of orders expected for different product specifications; the creation and deletion of simulated actors, and their assignment and reassignment of roles within the model being simulated; and the assignment and elaboration of simulated actors to actual physical actors within the simulation system.
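The first two of these rule families, the relationship between simulation and wall-clock time and the distribution of external events such as customer orders, are the core of any discrete-event simulation loop. A minimal Python sketch (names, horizon and distribution are illustrative assumptions):

```python
import heapq
import random

def simulate_orders(until=100.0, mean_interarrival=5.0, seed=42):
    # Simulation time jumps from event to event, decoupled from wall-clock
    # time; customer-order arrivals are drawn from a distribution, as in
    # the simulator's rules for the distribution of external events.
    rng = rng_source = random.Random(seed)
    clock, future, handled = 0.0, [], 0
    heapq.heappush(future, rng.expovariate(1.0 / mean_interarrival))
    while future:
        clock = heapq.heappop(future)       # advance to the next event time
        if clock > until:
            break
        handled += 1                        # process the simulated order
        heapq.heappush(future, clock + rng.expovariate(1.0 / mean_interarrival))
    return handled
```

A root meta-actor based simulator would add the remaining rule families on top of such a loop: creation and deletion of simulated actors, role (re)assignment, and the mapping of simulated actors onto physical nodes.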
  • a distributed system might be used in, for example, a war gaming situation, in which different units are located at different places.
  • One or more simulated actors is handled by each node of the physical simulation system, and the simulation manager will have rules for routing messages between the different nodes as the simulated actors communicate with each other.
  • the root meta-actor based simulator approach enables simulation of the system dynamics directly from the model of the system. This makes simulation more accessible to business and technology change programmes, thereby increasing the likelihood that simulation will be used, with a consequent likely improvement in the quality and performance in implemented changes.
  • Both analytic simulators and digital virtual environments can be simulated on a root meta-actor based platform. This is of particular use during a business change project in which, once modelled, a particular process can be simulated to analyse its likely performance, and can then be taught to workers through a digital virtual environment employing the same model.
  • a root meta-actor based simulator also offers the opportunity for more sophisticated simulations than the prior art in two ways. Firstly, by enabling the creation and deletion of simulated actors during the simulation, the dynamics of complex systems can be explored directly from the details of the individual actor model. This is particularly useful for simulations in which there are dynamics within an actor group, such as is found with marketing and military applications. For instance, in a market simulation, the individual behaviours of customers can be modelled, including the way in which they might pass on information by word of mouth. The simulation can then create new actors as positive news is communicated or, conversely, delete actors as negative news is propagated or as a competitor increases market share. Secondly, simulations of models which include meta-models can readily include adaptation of the system's rules during the simulation run, which can be particularly useful in certain gaming, training or design workshop applications.
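The first of these capabilities, creating and deleting simulated actors during the run, can be sketched for the word-of-mouth market example. All names and probabilities here are illustrative, not part of the specification:

```python
import random

def word_of_mouth_step(actors, p_recruit=0.3, p_defect=0.2, rng=None):
    # One simulation round: each customer actor may recruit a new actor
    # (positive news passed on by word of mouth) or be deleted (negative
    # news, or lost to a competitor gaining market share).
    rng = rng or random.Random(0)
    for actor in list(actors):          # iterate over a snapshot of the group
        draw = rng.random()
        if draw < p_recruit:
            actors.append(f"customer-{len(actors)}")   # new actor created
        elif draw < p_recruit + p_defect and actor in actors:
            actors.remove(actor)                       # actor deleted
    return actors
```

Because the actor population itself changes each round, group-level dynamics emerge directly from the individual actor model, which is the point made above for marketing and military applications.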
  • the combination of root meta-actor based simulators and root meta-actor based computing platforms eliminates the complexities associated with synchronisation between logical processes in a parallel implementation by exploiting the fine-grained parallelism of the massively micro-parallel architecture described above with reference to Figure 30.
  • the use of a root meta-actor based simulator removes other complexities, simplifying the implementation of parallel and distributed simulations.
  • the subject system of a simulator is an artificial replica of an intended or real-world subject system, for the purpose either of collecting and analysing data about the actor's dynamic performance or of creating at least a component of a digital virtual environment.
  • the artificial replica will usually involve some management of simulation time to allow it to be significantly faster or slower than would be the case in the intended or real-world subject system.
  • A high-level self-adapting system is illustrated in Figure 33.
  • a directing actor 33-1, an operating actor 33-2, a managing actor 33-3, a learning actor 33-4 and an enabling actor 33-5 are each connected to a communication channel 33- 6.
  • Each of the five illustrated actors can be a root meta-actor as described above.
  • the operating actor 33-2 performs the core operation of the system, sometimes called the "primary process".
  • the directing actor 33-1 provides an overall direction, in terms of purpose, performance goals and constraints.
  • the managing actor 33-3 plans, monitors and controls the operation of the system, in light of the direction provided by the directing actor 33-1.
  • the learning actor 33-4 allows the self- adaptive system to learn, in order that the overall system may improve or maximise its performance.
  • the enabling actor 33-5 has the function of acquiring or developing the actors which enable the system to effect all its other activities. In this context, the actors acquired or developed by the enabling actor 33-5 might be humans, machines, software, etc.
  • the self-adaptive system could be a "viable intelligent agent", as discussed in more detail below.
  • the subject system of a self-adaptive actor is similar to the subject system for a simple or composite actor.
  • Figure 34 shows how each of the directing actor 33-1, the enabling actor 33-5, the operating actor 33-2, the learning actor 33-4 and the managing actor 33-3 may include their own respective self-adaptive system. This can be considered as a recursive self-adaptive system.
  • the subject system of an operating actor 33-2 is the same subject system as for the composite, self-adaptive actor — i.e. it is the operating actor component of the self- adaptive actor which actually operates on the subject system of the self-adaptive actor.
  • the subject system of a directing actor 33-1 is the system of purposes and performance goals for a self-adaptive actor.
  • the subject system of a managing actor 33-3 is a system of plans and monitoring metrics for a self-adaptive actor, based on the purpose and performance goals defined by a directing actor 33-1.
  • the subject system of a learning actor 33-4 is a (meta-)system of models and meta-models employed by all actors within a composite, self-adaptive actor.
  • the subject system of an enabling actor 33-5 is a system of actors capable of performing the duties of all actors within the composite, self-adaptive actor.
  • a multiply recursive self-adaptive system is illustrated in Figure 35.
  • the learning actor 33-4 includes a self-adaptive system including a learning actor 35-1 which itself includes a further self-adaptive system.
  • the operating actor 33-2 includes a self-adaptive system, an operator 35-2 of which includes a further self-adaptive system having an operating actor 35-3 which is a further self-adaptive system. It will be appreciated how this concept can be applied to multiple recursion within the enabling actor 33-5, the directing actor 33-1 and the managing actor 33-3. Although in the example of Figure 35 it is the same actor which is subjected to recursion within a top-level actor, this is not necessary.
  • a root meta-actor based artificial intelligent agent can be embodied in a set of co-operating virtual actors, each of the actors representing one of the actors shown in Figure 33.
  • This intelligent agent has a number of key differences to a conventional expert system inference engine.
  • the model component can be considered as fully event-driven, rather than as using the conventional IF, THEN production rules.
  • the generic root meta-processor undertakes a partial scan of the rules of a model based on events generated in a previous cycle, which provides improved efficiency compared to a scheme in which a full scan of the rule base is performed every cycle, as occurs in conventional expert system inference engines.
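As an illustration of this partial-scan scheme, a rule base can be indexed by the event type that triggers each rule, so that a cycle touches only the rules matching events raised in the previous cycle. This is a minimal sketch under assumed names (engine, event strings), not the patented root meta-processor itself.

```python
from collections import defaultdict, deque

class EventDrivenRuleEngine:
    """Rules indexed by triggering event type: each cycle performs a
    partial scan driven by last cycle's events, instead of rescanning
    the whole rule base as a conventional inference engine would."""

    def __init__(self):
        self.rules = defaultdict(list)   # event type -> actions to run
        self.queue = deque()             # events raised, pending a cycle
        self.scanned = 0                 # rules examined, for comparison

    def on(self, event_type, action):
        self.rules[event_type].append(action)

    def raise_event(self, event_type, payload=None):
        self.queue.append((event_type, payload))

    def cycle(self):
        # Consume only the events generated in the previous cycle.
        pending, self.queue = self.queue, deque()
        for event_type, payload in pending:
            for action in self.rules[event_type]:
                self.scanned += 1
                action(payload, self.raise_event)

engine = EventDrivenRuleEngine()
log = []
engine.on("order-received",
          lambda p, emit: (log.append(f"check {p}"), emit("order-checked", p)))
engine.on("order-checked", lambda p, emit: log.append(f"ship {p}"))
engine.on("invoice-due", lambda p, emit: log.append(f"bill {p}"))  # never triggered
engine.raise_event("order-received", "A42")
engine.cycle()   # examines only the one matching rule
engine.cycle()   # examines only the rule triggered by the event just raised
```

After two cycles only two rules have been examined; a full-scan engine would have examined all three rules on every cycle regardless of which events occurred.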
  • a meta-actor based intelligent agent includes an enabling actor 33-5 which is capable of recruiting or developing actors to fulfil new roles, as the roles are identified by the learning actor 33-4, and could, given adequate authority, enable the entire artificial intelligent agent to become viable.
  • Providing a root meta-actor based intelligent agent with sensors and actuators (for example, within a robot environment) allows the agent to have the real world as its subject system.
  • ("System" is used here in all of the senses described earlier with reference to Figure 1.) It is possible also to set up the system development project operation as four dependent activity systems, namely investigation, development, preparation and deployment systems.
  • the investigation system is triggered by disruption to the system which is the target of the overall system development project. Its purpose is to understand some issue or opportunity, and to arrive at a model as to how it should be addressed.
  • the development system is triggered by a defined requirement to change the target system from the investigation system.
  • the development system designs, constructs and simulates that change.
  • the preparation system also is triggered by a defined change requirement from investigation.
  • the preparation system designs, constructs and simulates the temporary system by which the change will be deployed.
  • the deployment system is triggered by completion of both the development and preparation systems.
  • the deployment system executes the temporary system defined in the preparation system to implement the change to the target system defined in the development system.
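The triggering dependencies among the four activity systems above can be sketched as follows; the class name, method names and log strings are illustrative assumptions. Note that deployment behaves as an ALL-type event: it fires only once both development and preparation have completed.

```python
class Project:
    """Event-triggered sketch of the four dependent activity systems:
    investigation -> (development || preparation) -> deployment."""

    def __init__(self):
        self.completed = set()
        self.log = []

    def investigate(self, disruption):
        # Triggered by disruption to the target system.
        self.log.append(f"investigate: {disruption}")
        requirement = f"change for {disruption}"
        # The defined change requirement triggers both systems in parallel.
        self.develop(requirement)
        self.prepare(requirement)

    def develop(self, requirement):
        # Designs, constructs and simulates the change itself.
        self.log.append(f"develop: {requirement}")
        self._done("development")

    def prepare(self, requirement):
        # Designs, constructs and simulates the temporary deployment system.
        self.log.append(f"prepare: {requirement}")
        self._done("preparation")

    def _done(self, system):
        self.completed.add(system)
        # Deployment fires only when BOTH predecessor systems have completed.
        if {"development", "preparation"} <= self.completed:
            self.log.append("deploy")

p = Project()
p.investigate("billing errors")
```

A single disruption thus drives the whole chain, and "deploy" appears exactly once, after the second of the two parallel systems finishes.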
  • the development and preparation systems are set up with patterns of activity similar to the investigation system, namely modelling and simulation.
  • the modelling pattern defines the rules by which the subject system will operate and assigns and elaborates actors to roles, each actor handling individual responsibilities within these rules.
  • the simulation system both tests the details of the rules and roles modelled and analyses the dynamics of the resulting interactions.
  • Referring to FIG. 36, a self-adaptive system is shown in which an operating actor 37-1 includes an investigation system 37-2, a development system 37-3, a preparation system 37-4 and a deployment system 37-5.
  • Within the investigation system 37-2 are included a modelling system 37-6 and a simulation system 37-7.
  • the development and preparation systems 37-3, 37-4 also include model and simulation systems 37-6, 37-7, although these are omitted from the Figure for the sake of clarity.
  • the development and preparation systems 37-3, 37-4 are connected in parallel to the output of the investigation system 37-2.
  • the development and preparation systems 37-3, 37-4 also are connected to each other so that they may interact with each other. Each of these systems has an output connected to the deployment system 37-5.
  • the key difference between the investigation, development and preparation activity systems is what is being modelled and simulated.
  • For the investigation system it is a problem or an opportunity within the target system which is being modelled and simulated.
  • the purpose of the investigation is to obtain sufficient understanding of what needs to be changed in order to initiate development and preparation.
  • For the development system the change to the target system is itself being modelled and simulated.
  • For the preparation system it is the system by which the change will be deployed that is being modelled and simulated.
  • the preparation system is a temporary system which exists only during the transition between the current state and the future state of the target system. However, the preparation system may exist for several months or more where the change is for example a significant change to a large corporation, which may involve training and new roles and responsibilities, hardware and software sourcing and set up, etc.
  • This system development methodology is event driven, rather than flow driven as is found in prior art system development methodologies (see Figures 9A and 9B). It is also meta-model based, and the meta-models can be used to define classes of projects which can be re-used.
  • the system development methodology is also a learning system, and thus can learn to act in order to improve its performance.
  • the methodology also incorporates layered virtual actors, from hardware architecture to high level design.
  • the above described systems development methodology incorporates an integrated model, does not include semantic gaps between the problem and implementation domains, and does not require the translation of models at one stage into models at a later stage; instead, it is necessary only to provide elaborators for one or more virtual actors in the lower layers of the architecture.
  • the same activities and techniques do the same thing at different stages and within different domains, for example in software, hardware and process domains. This allows decisions concerning the assignment of roles to different types of actors (for example software or hardware) to be taken later in the cycle than in a conventional methodology, which enables greater flexibility. It also allows the potential of reusability, and the benefits thereof, to be realised, since the layered, event-driven meta-models which drive all the systems involved can be more suitable as a basis for creating generic, reusable components.
  • the subject system of a system development methodology is a system of system change, including complex technical and socio-technical system change.
  • an object can be a representation of a person, place (location), event, action or thing (for example a physical object).
  • An object may be elementary, for example a value O1 (e.g. an integer or character) or a reference O2 (e.g. a pointer to something).
  • An object may be compound, for example a composite object O3 or a collection of objects, such as a set O6 or O7.
  • An object may instead be a sequence or array etc. (not shown).
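The object taxonomy can be illustrated with ordinary values; the O1, O2, O3 and O6 labels follow the Figure, while the Python encoding itself is an assumption for illustration only.

```python
# Elementary objects: a value and a reference to another object.
value_obj = 42                          # value, e.g. O1 (an integer)
target = {"name": "pump-3"}
ref_obj = lambda: target                # reference, e.g. O2 (points at target)

# Compound objects: a composite, a set, and a sequence/array.
composite_obj = {"id": value_obj, "link": ref_obj}   # composite, e.g. O3
set_obj = frozenset({1, 2, 3})                       # set, e.g. O6
seq_obj = (1, 2, 3)                                  # sequence (not shown)
```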
  • An event may be considered to be an object which reflects an instantaneous change in the state of another object.
  • all the arrow boxes labelled α, Δ, E1, T, F, ANY and ω represent events.
  • An event may be elementary, in which case it might either be an object event or an action event.
  • Object events include created (α), deleted (not shown) and changed or modified (Δ).
  • Action events include initiated (not explicitly shown), completed (ω), and completed-in-state (T and F).
  • An initiated event occurs when an action begins to be executed.
  • the event E1 initiates action A1, which in turn initiates action A1-1, shown by the link from the top left corner of A1 to the top left corner of A1-1.
  • a completion (ω) event occurs when execution of an action is finished. For some actions, it is the outcome of the action which is important. This applies particularly to binary tests, which can result in a True T or False F outcome, but may also apply to n-ary tests (for example "CASE" statements in high level languages) where there may be more than two outcome states (not shown).
  • the resulting events can be termed completed-in-state events.
  • a compound event could be an ANY event which occurs when one of its components occurs.
  • a compound event could be an ALL event (not shown) which occurs when all components of it have occurred.
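A minimal sketch of the two compound event kinds, assuming a simple set-based encoding (class and event names are illustrative):

```python
class AnyEvent:
    """Fires as soon as any one of its component events occurs."""
    def __init__(self, components):
        self.components = set(components)
        self.fired = False
    def occur(self, component):
        if component in self.components:
            self.fired = True

class AllEvent:
    """Fires only once every one of its component events has occurred."""
    def __init__(self, components):
        self.remaining = set(components)
        self.fired = False
    def occur(self, component):
        self.remaining.discard(component)
        if not self.remaining:
            self.fired = True

any_done = AnyEvent({"dev-complete", "prep-complete"})
all_done = AllEvent({"dev-complete", "prep-complete"})
any_done.occur("dev-complete")
all_done.occur("dev-complete")
fired_after_one = all_done.fired      # still False: one component outstanding
all_done.occur("prep-complete")       # now all components have occurred
```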
  • An action is an object which is initiated in response to an event and which causes changes in other objects.
  • All the boxes with rounded corners labelled A1 and A1-1 to A1-4 represent actions.
  • An action may be elementary (A1-1 to A1-4) or compound (A1).
  • a compound action A1 is made up of sub-actions A1-1 to A1-4, each of which is in turn initiated by events which are triggered either directly or indirectly by the initiation of the composite action.
  • a compound action may be structured as a sequence (one sub-action after another, as action A1-4 is initiated by the completion of actions A1-2 or A1-3), a concurrence (two or more sub-actions in parallel, not shown), a recursion (where the sub-action references the compound action, either directly or indirectly, not shown), a selection such as an IF THEN (skip) action, an IF THEN ELSE (two-way) action (shown as a test action A1-1 followed by alternative actions: A1-2, initiated if A1-1 terminates in a state of True, or A1-3, initiated if A1-1 terminates in a state of False) or a CASE (n-way selection, not shown) action, or a repetition, which repeats a sub-action zero or more times (WHILE, not shown) or one or more times.
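The event-driven (rather than flow-driven) character of a compound action can be illustrated with a small link table: each completion or completed-in-state event names the sub-action it initiates. The identifiers follow the Figure; the table encoding is a sketch, not the patent's notation.

```python
# Links from events to the sub-actions they initiate.  Events are
# ("initiated", name), ("completed", name), or
# ("completed-in-state", name, outcome), matching the elementary events.
LINKS = {
    ("initiated", "A1"): "A1-1",
    ("completed-in-state", "A1-1", True):  "A1-2",  # test was True
    ("completed-in-state", "A1-1", False): "A1-3",  # test was False
    ("completed", "A1-2"): "A1-4",
    ("completed", "A1-3"): "A1-4",
}

def execute(x):
    """Run compound action A1 by following event links until no link fires."""
    trace = []
    event = ("initiated", "A1")
    while event in LINKS:
        action = LINKS[event]
        trace.append(action)
        if action == "A1-1":                       # binary test action
            event = ("completed-in-state", "A1-1", x > 0)
        else:
            event = ("completed", action)
    return trace
```

Running `execute` with a positive argument follows the True branch (A1-1, A1-2, A1-4); a non-positive argument follows the False branch (A1-1, A1-3, A1-4). The IF THEN ELSE structure emerges entirely from the event links rather than from an explicit control flow.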
  • Advantages of root meta-actor based models include that they employ generic root meta-execution models, and that they are event driven and parallel (as opposed to flow driven and sequential, as found in the prior art). Also, exceptions are treated as just another kind of event, whereas their handling is more complicated with prior art modelling. The approach also allows procedures for human beings to be written in the same language as procedures for machines. Furthermore, constraints can be handled within the primary modelling language, without resorting to an "add-on". This is in contrast to UML, for example, where constraints are handled by OCL. Root meta-actor based models also provide the precision required for hardware and software modelling together with the visual presentation required when communicating processes to humans. Importantly, making the models directly implementable allows them to be tested, simulated and employed directly, without the need to translate to another language or paradigm.
  • the or each model may take any suitable form.
  • it may be a graphical representation of the subject system, or alternatively a set of computer-readable instructions.
  • at least some of the instructions are non-sequential, and can be expressed, for example, in set theory or mathematical notation.
  • the computing devices described above, particularly in relation to Figures 28, 29 and 30, preferably are electronic computing devices. Alternatively, they may be bio-mechanical, quantum or any other type of computing device. It will be appreciated by those skilled in the relevant art how such other forms of computing device may be constructed.
  • Any of the computing devices described may be provided as part of, or packaged for use in or as, a general purpose computer, a manufacturing or process control device or system, a network infrastructure device, a mobile computing device, a mobile communication device, a domestic appliance, a vehicle, a computer peripheral or a robot, for example.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Automation & Control Theory (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Multimedia (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Nonlinear Science (AREA)
  • Processing Or Creating Images (AREA)
  • Stored Programmes (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention concerns an active system in which an actor (10-1) can perform an action (10-4) in a subject system (10-2). The actor (10-1) and the subject system (10-2) exist in an environment (10-3) which can affect the system (10-2). Neither the actor (10-1) nor the system (10-2) can control the environment (10-3). The actor (10-1) comprises a model (11-1) and a processor (11-2). The processor (11-2) is guided (11-5) by the model (11-1). The processor (11-2) is arranged to execute an action (11-4) in the system (10-2). The system (10-2) is known (11-3) by the model (11-1). This allows the actor (10-1) to be guided in its action (11-4) on the system (10-2) by the model (11-1) of the system (10-2). Events can occur in the system (10-2), whether through the actions of the actor (10-1) guided by the model (11-1), through the actions of other actors, or through a change of state of the system itself (for example the progress of a chemical reaction) or of its environment (10-3) (for example the passage of time). The actor (10-1) updates the model (11-1) through its own actions. When the processor (11-2) executes processing according to the model (11-1), it updates the model (11-1) with intermediate actions and the actions executed in the system (10-2).
PCT/EP2005/055310 2004-10-18 2005-10-17 Action sur un systeme d'interet WO2006042841A2 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2007536195A JP5128949B2 (ja) 2004-10-18 2005-10-17 サブジェクトシステムへの作用
GB0613704A GB2425868B (en) 2004-10-18 2005-10-17 Logic-based Computing Device and Method
CA2583921A CA2583921C (fr) 2004-10-18 2005-10-17 Dispositif de calcul fonde sur la logique et methode
AU2005296859A AU2005296859B2 (en) 2004-10-18 2005-10-17 Acting on a subject system
CN2005800376254A CN101288090B (zh) 2004-10-18 2005-10-17 对主题系统的动作施加
EP05806079A EP1805704A2 (fr) 2004-10-18 2005-10-17 Action sur un systeme d'interet

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0423110.6A GB0423110D0 (en) 2004-10-18 2004-10-18 Acting on a subject system
GB0423110.6 2004-10-18

Publications (2)

Publication Number Publication Date
WO2006042841A2 true WO2006042841A2 (fr) 2006-04-27
WO2006042841A8 WO2006042841A8 (fr) 2008-01-17

Family

ID=33462924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2005/055310 WO2006042841A2 (fr) 2004-10-18 2005-10-17 Action sur un systeme d'interet

Country Status (7)

Country Link
EP (1) EP1805704A2 (fr)
JP (1) JP5128949B2 (fr)
CN (1) CN101288090B (fr)
AU (1) AU2005296859B2 (fr)
CA (1) CA2583921C (fr)
GB (3) GB0423110D0 (fr)
WO (1) WO2006042841A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110066566A1 (en) * 2009-09-16 2011-03-17 International Business Machines Corporation Conceptual representation of business processes for cross-domain mapping
US8401992B2 (en) 2009-02-06 2013-03-19 IT Actual, Sdn. Bhd. Computing platform based on a hierarchy of nested data structures

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8078552B2 (en) * 2008-03-08 2011-12-13 Tokyo Electron Limited Autonomous adaptive system and method for improving semiconductor manufacturing quality
US8725667B2 (en) 2008-03-08 2014-05-13 Tokyo Electron Limited Method and system for detection of tool performance degradation and mismatch
US8190543B2 (en) 2008-03-08 2012-05-29 Tokyo Electron Limited Autonomous biologically based learning tool
US8396582B2 (en) 2008-03-08 2013-03-12 Tokyo Electron Limited Method and apparatus for self-learning and self-improving a semiconductor manufacturing tool
AU2011380289B2 (en) * 2011-10-31 2015-03-26 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for synchronizing events
KR102011094B1 (ko) * 2017-06-28 2019-08-14 백옥기 인공지능의 신경가소성 및 자가적응 유연성을 구현하기 위하여 자가적응형 동적 다차원 배열을 관리하는 방법과 이를 이용한 컴퓨터 소프트웨어
CN109460214B (zh) * 2018-11-06 2021-07-27 上海航天测控通信研究所 基于idef建模的航天器软件结构化方法
CN110164082B (zh) * 2019-06-21 2021-01-15 山东大学 家庭安防报警系统的静态控制器设计方法及系统
CN117099057A (zh) * 2021-04-02 2023-11-21 三菱电机株式会社 程序生成装置和程序生成方法

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IE873207L (en) * 1987-11-26 1989-05-26 Schering Ag An energy management system
US5596331A (en) * 1988-05-13 1997-01-21 Lockheed Martin Corporation Real-time control sequencer with state matrix logic
JPH02109127A (ja) * 1988-10-19 1990-04-20 Hitachi Ltd 仕様処理方法
US5638539A (en) * 1994-02-28 1997-06-10 International Business Machines Corporation Tool for defining complex systems
US5887143A (en) * 1995-10-26 1999-03-23 Hitachi, Ltd. Apparatus and method for synchronizing execution of programs in a distributed real-time computing system
JP3598732B2 (ja) * 1997-05-20 2004-12-08 三菱電機株式会社 分散制御システムの構成管理方法およびこれに用いるデータ
JPH113108A (ja) * 1997-06-13 1999-01-06 Sony Corp 加工制御方法及び加工制御装置
WO1999030230A1 (fr) * 1997-12-12 1999-06-17 Cacheon, L.L.C. Systeme et procede de traitement informatique naturellement parallele
JP3863069B2 (ja) * 2002-06-06 2006-12-27 本田技研工業株式会社 プラントの制御装置
JP2004164328A (ja) * 2002-11-13 2004-06-10 Fujitsu Ltd 一人camシステムおよび一人camプログラム
US7006900B2 (en) * 2002-11-14 2006-02-28 Asm International N.V. Hybrid cascade model-based predictive control system
AU2003290932A1 (en) * 2002-11-15 2004-06-15 Applied Materials, Inc. Method, system and medium for controlling manufacture process having multivariate input parameters
JP3940665B2 (ja) * 2002-11-27 2007-07-04 株式会社東芝 ハイブリッドシミュレーション装置およびプログラム

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ACKOFF, R. L.: "Towards a System of Systems Concepts", MANAGEMENT SCIENCE, vol. 17, no. 11, July 1971, pages 661 - 671
C.A.R. HOARE: "Communicating Sequential Processes", COMMUNICATIONS OF THE ACM, vol. 21, 1978, pages 666 - 677, XP058231723, DOI: doi:10.1145/359576.359585
COLWELL, R. P.; STECK, R. L.: "Proceedings of the International Solid State Circuits Conference", February 1995, article "A 0.6µm BiCMOS Processor with Dynamic Execution"
DIJKSTRA, E. W.: "Guarded Commands, Nondeterminacy and Formal Derivation of Programs", COMMUNICATIONS OF THE ACM, vol. 18, 1975, pages 453 - 457, XP058152113, DOI: doi:10.1145/360933.360975
GUL AGHA: "Actors: A Model of Concurrent Computation in Distributed Systems", 1986, MIT PRESS
See also references of EP1805704A2

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401992B2 (en) 2009-02-06 2013-03-19 IT Actual, Sdn. Bhd. Computing platform based on a hierarchy of nested data structures
US20110066566A1 (en) * 2009-09-16 2011-03-17 International Business Machines Corporation Conceptual representation of business processes for cross-domain mapping
US11315208B2 (en) * 2009-09-16 2022-04-26 International Business Machines Corporation Conceptual representation of business processes for cross-domain mapping

Also Published As

Publication number Publication date
GB0423110D0 (en) 2004-11-17
GB2428322A9 (en) 2007-02-12
GB0613704D0 (en) 2006-08-23
CN101288090B (zh) 2013-09-04
GB2425868B (en) 2007-07-04
GB2425868A (en) 2006-11-08
GB0618546D0 (en) 2006-11-01
GB2428322A (en) 2007-01-24
JP2008517362A (ja) 2008-05-22
CN101288090A (zh) 2008-10-15
WO2006042841A8 (fr) 2008-01-17
EP1805704A2 (fr) 2007-07-11
CA2583921A1 (fr) 2006-04-27
JP5128949B2 (ja) 2013-01-23
CA2583921C (fr) 2017-01-24
AU2005296859B2 (en) 2011-03-10
AU2005296859A1 (en) 2006-04-27

Similar Documents

Publication Publication Date Title
US7822592B2 (en) Acting on a subject system
AU2005296859B2 (en) Acting on a subject system
Naujokat et al. CINCO: a simplicity-driven approach to full generation of domain-specific graphical modeling tools
Cossentino From requirements to code with PASSI methodology
Debbabi et al. Verification and validation in systems engineering: assessing UML/SysML design models
Cadoret et al. Design patterns for rule-based refinement of safety critical embedded systems models
AU2003227991A1 (en) Generation of executable processes for distribution
Manolescu Workflow enactment with continuation and future objects
Kamburjan et al. Session-based compositional analysis for actor-based languages using futures
Bodorik et al. Tabs: Transforming automatically bpmn models into blockchain smart contracts
Degano et al. A two-component language for adaptation: design, semantics and program analysis
Schlatte et al. Release the beasts: When formal methods meet real world data
Griss Product-line architectures
Achten et al. An introduction to task oriented programming
Chrszon et al. Modeling role-based systems with exogenous coordination
Brown et al. Enterprise-scale CBD: Building complex computer systems from components
Colburn et al. Decoupling as a fundamental value of computer science
Arronategui et al. Towards an architecture proposal for federation of distributed DES simulators
Fiadeiro Software services: Scientific challenge or industrial hype?
Ferigo et al. A generic synchronous dataflow architecture to rapidly prototype and deploy robot controllers
Sendall Specifying reactive system behavior
Alhaj Automatic Derivation of Performance Models in the Context of Model-Driven SOA
Wand et al. Computing tomorrow: future research directions in computer science
Poddar et al. Verification of Giotto based Embedded Control Systems.
Majumdar Robots at the edge of the cloud

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200580037625.4

Country of ref document: CN

AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 0613704.6

Country of ref document: GB

Ref document number: 0613704

Country of ref document: GB

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2583921

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2007536195

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2005296859

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 2005296859

Country of ref document: AU

Date of ref document: 20051017

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2005806079

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2154/CHENP/2007

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 2005806079

Country of ref document: EP