GB2468859A - Processing a condensed graph on multiple machines using proxies - Google Patents


Info

Publication number
GB2468859A
GB2468859A GB0904956A
Authority
GB
United Kingdom
Prior art keywords
node
proxy
machine
port
storing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0904956A
Other versions
GB0904956D0 (en)
Inventor
John Patrick Morrison
James John Kennedy
David Anthony Power
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University College Cork
Original Assignee
University College Cork
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University College Cork filed Critical University College Cork
Priority to GB0904956A priority Critical patent/GB2468859A/en
Publication of GB0904956D0 publication Critical patent/GB0904956D0/en
Priority to PCT/EP2010/053852 priority patent/WO2010108967A2/en
Publication of GB2468859A publication Critical patent/GB2468859A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5066 - Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 - Digital computers in general; Data processing equipment in general
    • G06F 15/76 - Architectures of general purpose stored program computers
    • G06F 15/82 - Architectures of general purpose stored program computers data or demand driven
    • G06F 15/825 - Dataflow computers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/448 - Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4494 - Execution paradigms, e.g. implementations of programming paradigms data driven

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Advance Control (AREA)
  • Computer And Data Communications (AREA)

Abstract

A condensed graph consists of a number of nodes. Each node has input ports and output ports. The output ports of one node are connected to the input port of another node. If a first node 31 is executed on a different machine 81 to the machine 83 on which a second node 33 is executed and an output of the first node is connected to an input of the second node, then a proxy 85 for the first node is created on the machine on which the second node is executed. A proxy 87 for the proxy may be created on the machine where the first node is executed.

Description

INTELLECTUAL PROPERTY OFFICE
Application No. GB0904956.0 RTM Date: 23 June 2009
The following terms are registered trademarks and should be read as such wherever they occur in this document: Java
Intellectual Property Office is an operating name of the Patent Office. www.ipo.gov.uk
"A method of processing a condensed graph"
Introduction
This invention relates to a method of processing a condensed graph. More specifically, this invention relates to a method of processing a condensed graph spread over a plurality of machines and a method of partitioning the condensed graph amongst the plurality of machines.
Numerous techniques have been developed to reduce the processing time taken to process large amounts of data and/or complex algorithms. Included among these are distributed computing, metacomputing and, more recently, grid computing techniques. In each of these techniques, the computational burden is distributed over a plurality of machines, resulting in many instances in a significant reduction in the processing time.
Other methodologies have been developed in parallel to reduce the processing time taken to process large amounts of data and/or complex algorithms. One such methodology that is particularly advantageous is commonly referred to as condensed graphs. A condensed graphs approach entails using graphs to represent and simplify the code structure, thereby facilitating a reduction in the processing time of the code.
It is believed that further speed-up could be achieved by successfully combining a condensed graphs approach with any one of the distributed computing, metacomputing and grid computing techniques. However, heretofore, entirely successful integration of a condensed graphs approach with the above-mentioned computing techniques has not been achieved. This is due mainly to problems encountered when partitioning a condensed graph among numerous disparate machines. In particular, problems arise when nodes of a condensed graph are sent to a remote machine for execution while other nodes that may supply operand values to the first nodes are not also sent to the remote machine. There are a number of ways in which a node that feeds into such an operand may be handled. For example, the node may be copied in its entirety and sent with the original node to the remote machine. Alternatively, the node may be executed in advance and its result sent with the original node to the remote machine. However, neither of these methods is entirely satisfactory and neither approach is compliant with a true condensed graph model.
It is an object of the present invention to provide a method of processing a condensed graph that overcomes at least some of the problems with the known methods.
Statements of Invention
According to the invention there is provided a method of processing a condensed graph spread over a plurality of machines, the condensed graph comprising a plurality of nodes including a first node located on a first machine and a second node located on a second machine, the second node having an operand port supplied by the destination port of the first node, the method comprising: creating a first node destination port proxy object for the operand port of the second node and sending that first node destination port proxy object to the second machine; the second machine generating a first node proxy for the first node with the first node destination port proxy object; the first node proxy receiving a request to execute, the first node proxy transmitting an instruction to the first machine to execute the first node; the first machine receiving the request, creating a first node proxy proxy, providing the first node with the first node proxy proxy as a destination value, executing the first node and passing the result of the execution of the first node to the first node proxy proxy; the first node proxy proxy executing and thereafter transmitting the result of the execution of the first node to the first node proxy on the second machine; and populating the destination port of the first node proxy with the result from the first node.
By having such a method, it will not be necessary to evaluate the first node that feeds into the operand port of the second node prior to sending the second node to the remote machine. Furthermore, it will not be necessary to copy the first node and transmit the copy of the node in its entirety to the remote machine with the second node. This will allow the method to maintain the dependencies that were expressed in the original condensed graph and will comply with a true condensed graph model. Furthermore, the method will avoid unnecessary processing and duplication and the method will allow for the condensed graph to be distributed among a number of machines in a relatively straightforward manner.
In another embodiment of the invention, the method further comprises the step of storing first machine information in the first node proxy.
In one embodiment of the invention, the step of storing first machine information in the first node proxy comprises storing the name of the first machine as an operand of the first node proxy.
In one embodiment of the invention, the method further comprises the step of storing first node port information in the first node proxy.
In one embodiment of the invention, the step of storing the first node port information in the first node proxy comprises storing a port proxy identifier as an operand of the first node proxy.
In one embodiment of the invention, the method further comprises the step of storing second machine information in the first node proxy proxy.
In one embodiment of the invention, the step of storing second machine information in the first node proxy proxy comprises storing the name of the second machine as an operand of the first node proxy proxy.
In one embodiment of the invention, the method further comprises the step of storing first node proxy port information in the first node proxy proxy.
In one embodiment of the invention, the step of storing the first node proxy port information in the first node proxy proxy comprises storing a port proxy identifier as an operand of the first node proxy proxy.
In one embodiment of the invention, the first node destination port proxy object comprises a port proxy identifier representative of a first machine destination port reference and the method comprises the step of storing the port proxy identifier in a table with the first machine destination port reference.
In one embodiment of the invention, there is provided a method of partitioning a condensed graph amongst a plurality of machines, the condensed graph comprising a plurality of nodes including a first node, located on a first machine, whose destination port is connected to an operand port of a second node, the method comprising the steps of: transmitting the second node to a remote second machine; and creating a first node proxy on the second machine.
In one embodiment of the invention the method further comprises the step of creating a first node proxy proxy on the first machine for the destination port of the first node.
In a further embodiment of the invention there is provided a computer program product having program instructions for causing a computer to perform the method.
In a further embodiment of the invention there is provided a condensed graph machine comprising a Triple manager and at least one ancillary processor, the condensed graph machine running computer program code having program instructions for causing the condensed graph machine to operate according to the method described.
Detailed Description of the Invention
The invention will now be more clearly understood from the following description of some embodiments thereof given by way of example only with reference to the accompanying drawings, in which:
Figure 1 is an example of a simple condensed graph;
Figure 2 is a detailed view of a node of a condensed graph;
Figure 3 is a portion of another condensed graph;
Figure 4 is a portion of another condensed graph showing a "lazy" computing approach;
Figure 5 is a portion of another condensed graph;
Figure 6 is a portion of another condensed graph;
Figure 7 is a diagrammatic representation of a pair of machines with the condensed graph shown in Figure 6 distributed between the machines;
Figure 8 is a diagrammatic representation of the pair of machines shown in Figure 7 after Node B has requested an input from Node A;
Figure 9 is a diagrammatic representation of a condensed graph machine;
Figure 10 is a diagrammatic representation of the components of a Triple Manager;
Figure 11 is a diagrammatic representation of a Generator/Consumer application associated with the backplane of a WebCom application; and
Figure 12 is a diagrammatic representation of the Generator/Consumer application associated with the backplane of a WebCom application with a compute engine loaded therein.
Before discussing the invention in detail, it is deemed advantageous to provide a brief description of condensed graphs in general. Referring to the drawings and initially to Figure 1 thereof, there is shown a diagrammatic representation of a simple condensed graph, indicated generally by the reference numeral 1. The condensed graph 1 comprises a plurality of vertices, also commonly referred to as nodes, 3 and a plurality of edges, also commonly referred to as arcs, 5. The structure of a computation or an algorithm can be modelled using a graph in which operations and data can be modelled using vertices and sequencing constraints can be modelled using edges. An algorithm is essentially a finite set of instructions performed in a particular order to execute a specific task and graphs provide a natural way of expressing the algorithms.
Referring to Figure 2, there is shown a more detailed view of a condensed graph node 3.
The node 3 comprises a pair of operand ports 7, 9, an instruction port, otherwise referred to as a function port, 11, and an output port, otherwise referred to as a destination port, 13. A port is an entry or an exit point of a node 3. When the node 3 has inputs on all of its ports 7, 9, 11, 13, it is said to be a fireable node. In other words, it has operand data on which to operate, a function with which to process the operand data and a destination to send the result to once the operand data has been processed, and it is therefore possible to execute the node.
For example, the operand ports 7, 9 may each have an integer value thereon, the function port may be an "add" operation and a destination may be another node that requires the result of the summation. When all the ports have a value, it is possible to add the two integer values and send the result on to the destination node. The operand(s), function and destination are collectively referred to as a computation triple.
When all three parts of the computation triple are present, it is possible to execute the node, otherwise, one or more of the missing computational triple elements must be provided in order to execute the node. An instruction in an algorithm corresponds to a condensed graph representing all three elements of a computational triple.
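The computation triple described above can be sketched in Java (the implementation language named later in this document). This is a minimal illustrative model, not the patent's API: the class and method names are assumptions, and the node is fireable only once operands, function and destination are all present.

```java
import java.util.function.BinaryOperator;
import java.util.function.Consumer;

// A minimal sketch of a computation triple: operands, function and destination.
class TripleNode {
    private final Integer[] operands;          // operand ports (e.g. ports 7 and 9)
    private BinaryOperator<Integer> function;  // function port (e.g. port 11)
    private Consumer<Integer> destination;     // destination port (e.g. port 13)

    TripleNode(int arity) { operands = new Integer[arity]; }

    void setOperand(int port, int value) { operands[port] = value; }
    void setFunction(BinaryOperator<Integer> f) { function = f; }
    void setDestination(Consumer<Integer> d) { destination = d; }

    // The node is fireable only when every element of the triple is present.
    boolean isFireable() {
        for (Integer op : operands) {
            if (op == null) return false;
        }
        return function != null && destination != null;
    }

    // Firing applies the function to the operands and sends the result on.
    void fire() {
        if (!isFireable()) throw new IllegalStateException("node is not fireable");
        destination.accept(function.apply(operands[0], operands[1]));
    }
}
```

For the "add" example above, the two operand ports would hold 2 and 3, the function port `Integer::sum`, and the destination a consumer representing the downstream node; only then does the node fire and deliver 5.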
One of the most significant advantages of using a condensed graphs model approach is that it incorporates an imperative/control driven model, an availability/data driven model and a coercion driven model of computation as special cases in the overall condensed graphs model. This provides significant flexibility in the manner in which the data is processed. Depending on which part of the computation triple is missing, a different case of the computational model can be employed. For example, if the operand data is missing from the computational triple, an availability model may be employed. This approach is also often referred to as dataflow or data driven computing. If the function data is missing from the computational triple, an imperative model is employed. Finally, if the destination data is missing from the computational triple, a coercion driven model is employed. This coercion driven model is often referred to as demand driven computing.
Referring to Figure 3, there is shown a portion of a condensed graph, indicated generally by the reference numeral 30, comprising a pair of nodes, node "A" 31 and node "B" 33.
The arc 34 from the output (destination) port 35 of node A to one of the input (operand) ports 37 of node B indicates that node B requires the result of node A before node B can execute. Node B further comprises a second operand port 38 and a third operand port 39.
This is traditional dataflow computing. When node A fires, it sends its result on to the input port of Node B. When node B has all three of its operand values on its operand ports 37, 38, 39 it can then fire. In the example shown in Figure 3, the destination port and the function port of node B 33 and the operand port(s) and the function port of node A 31 have been omitted for clarity. It will be understood that in order for node B to fire, the function port and the destination port (not shown) of node B must also each have an input thereon.
Referring to Figure 4, there is shown a portion of another condensed graph, similar to that shown in Figure 3, where like parts have been given the same reference numerals as before. In the condensed graph shown in Figure 4, indicated generally by the reference numeral 40, the arc joining the output port 35 of node A to the input port 37 of node B has been replaced by arc 41. The arc 41 is shown in dashed form indicating that node A will not fire until Node B attempts to fire and realises that it needs a result from node A. This is effectively "lazy" or "demand driven" computing. Again, the destination port and the function port of node B 33 and the operand port(s) and the function port of node A 31 have been omitted for clarity. It will be understood that in order for node B to fire, the function port and the destination port (not shown) of node B must also each have an input thereon.
A more thorough description of condensed graphs, their implementation and their execution is provided in the published paper entitled "Condensed Graphs: Unifying Availability-Driven, Coercion-Driven and Control-Driven Computing" by John P. Morrison, (ISBN 90-386-0478-5) a co-inventor of the application in suit, and its entire contents are incorporated herein by way of reference. In particular, the content of that paper regarding representation of algorithms using condensed graphs, the manipulation of the condensed graphs including stemming, grafting and throttling techniques, and the aspects of parallelism (conservative and speculative), structure and storage is incorporated herein.
Referring to Figure 5, there is shown a specific implementation of a condensed graph, indicated by the reference numeral 50, to which the present invention is relevant. The condensed graph 50 comprises three nodes, node A 51, node C 52 and an IfElse node 53. The IfElse node 53 comprises three operand input ports 55, 58, 60. The node A 51 comprises an output port 54 and the node C 52 comprises an output port 57. The output port 54 of node A 51 is connected to the input port 55 of the IfElse node 53 by way of a dashed edge 56. The output port 57 of node C is connected to the input port 58 of the IfElse node 53 by way of a second dashed edge 59. The operand port 60 of the IfElse node 53 has a control input for the IfElse node 53. Again, the destination port and the function port of the IfElse node 53 and the operand ports and the function ports of node A 51 and node C 52 have been omitted for clarity. It will be understood that in order for the IfElse node 53 to fire, the function port and the destination port (not shown) of the node must also each have an input thereon.
In the embodiment shown, the IfElse node 53 does not need the result of node A 51 in order to fire, as in the case described above with reference to Figure 4. The IfElse node 53 has three operands but only absolutely requires a result from one of the operands, in this case operand 60, and an input from either of the other operands 55, 58. Operand 60 is used as a control input to the IfElse node 53 and we say that this operand 60 is "strict" while the operands 55, 58 are "non-strict" in that they only need a reference to another graph/node rather than a primitive value.
Strictness is an attribute of the operand port entity and it is possible to have dashed lines going into strict operand ports. The strictness becomes relevant when deciding how to proceed. For example, taking a first case, a solid edge from node A to node B means that node A's destination port has a reference to node B, i.e. node B is currently a destination of node A. In a second case, a dashed arc means that node A has no destination, but node B has a reference to node A in an operand port. The strictness of the operand port in node B determines how node B will now fire. If the operand port is strict, then node B requires a simple data value and so will graft onto node A, i.e. the arc will become solid (we now have the state described in the first case above). If the operand port on node B is non-strict, then B can fire with just the reference to node A as an input, e.g. the IfElse node can simply pass on that reference to node A without requiring that node A be evaluated. If it so happens that a data value is on the operand port when the IfElse fires, then it simply passes on that value.
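The firing decision just described can be summarised in a short sketch. The names below are illustrative assumptions: a strict port must graft onto the referenced node and wait for a primitive value, while a non-strict port can fire with the bare reference.

```java
// Sketch of the strict/non-strict firing decision for an operand port.
enum Strictness { STRICT, NON_STRICT }

class OperandPort {
    final Strictness strictness;
    Object value;            // either a primitive value or a node reference
    boolean isNodeReference; // true while the value is an unevaluated node

    OperandPort(Strictness s) { strictness = s; }

    // Decide whether the consuming node can fire with the current input:
    // a strict port needs a primitive value (so it grafts and waits);
    // a non-strict port may fire with just the node reference.
    boolean canFireWithCurrentInput() {
        if (value == null) return false;
        if (!isNodeReference) return true;          // primitive value present
        return strictness == Strictness.NON_STRICT; // a reference is enough
    }
}
```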
If the operand 60 input is "true", the result of the IfElse will be node A 51 itself and input 55 will be taken as the input. If the operand 60 is "false", the result of the IfElse will be node C 52 itself and the input 58 will be taken as the input. In this way, the nodes can be passed through the graph unevaluated, and the nodes will only fire when their results are really needed. The example above saves the machine from having to evaluate node A and node C before deciding which one isn't needed. In other words, if node A 51 is selected, the instruction represented by node A is passed through the condensed graph and will not have to be evaluated until a result is needed.
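The IfElse behaviour above can be sketched with `Supplier` standing in for an unevaluated node reference; the class and method names are illustrative assumptions. The key point is that the chosen branch is passed on as a reference and neither branch is evaluated when the IfElse itself fires.

```java
import java.util.function.Supplier;

// Sketch of an IfElse node: the control operand is strict; the two branch
// operands are non-strict references passed through unevaluated.
class IfElseNode {
    static Supplier<Integer> fire(boolean control,
                                  Supplier<Integer> nodeA,
                                  Supplier<Integer> nodeC) {
        // Only the chosen reference is returned; neither branch runs here.
        return control ? nodeA : nodeC;
    }
}
```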
Referring to Figure 6, there is shown a portion of the condensed graph shown in Figure 4, indicated generally by the reference numeral 60, where some parts have been removed for clarity. In the condensed graph shown in Figure 6, there is an arc 41 joining the output port 35 of node A to the input port 37 of node B. A problem arises when we wish to partition this condensed graph over a plurality of machines. As each node is arbitrarily complex, node B 33 may place a significant computational burden on a processor. In the embodiment shown, we will assume that node B is complex and that it is desirable to send node B 33 to a remote machine to execute. The question arises: how should node A be handled? Heretofore, two methods existed for treating node A. The first method consisted of executing node A and sending the result of the execution of node A to node B so that node B had the complete operand data. The second method consisted of creating a copy of node A and sending the copy of node A to the remote machine with node B. Although these permit distributed computing and like techniques to be applied to the algorithms represented by the condensed graph, neither of these approaches complies with the Condensed Graphs model, and both caused many problems. For example, such approaches prevented parallelism and forced an eager paradigm onto the condensed graphs model. The approaches caused stemming, as all condensed node operands should be non-strict. These in turn eliminated laziness in the condensed graphs model, which resulted in the reduction of the condensed graphs model to a purely dataflow computing model.
Referring to Figures 7 and 8, there is shown a pair of machines with the condensed graph shown in Figure 6 partitioned therebetween and implementing the method of processing a condensed graph according to the present invention, where like parts have been given the same reference numeral as before. Referring specifically to Figure 7, node A 31 is located on a first machine 81 whereas node B 33 is located on a remote second machine 83. For the non-strict operand port 37 of node B 33, a node A proxy 85 is created by the second machine. The condensed graph is processed until node B attempts to fire. When node B attempts to fire, an execution platform (not shown) on the second, remote machine 83 intercepts the execution of the node A proxy 85 and sends a message to the first machine 81 that it requires a result from node A 31.
On receiving the message that node B 33 requires a result from node A 31, the first machine 81 sets up a node A proxy proxy 87, as shown in Figure 8. Then, Node A 31 executes (as it now has a destination) and passes its result to the node A proxy proxy 87. The node A proxy proxy 87 is then fireable and its execution is intercepted by an execution platform (not shown) on the first machine 81. The operand to the node A proxy proxy 87 is sent to the second machine 83 and accordingly the result from Node A 31 is effectively sent to the remote, second machine 83. This is possible due to the fact that the destination machine information, in other words the machine where information must be transmitted in due course, is stored in the respective node proxy when it is created.
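The exchange described in the last two paragraphs can be simulated in memory with two objects standing in for the machines. This is only a sketch under stated assumptions: real WebCom sends the request and the result over a network, and all names here (`Machine`, `NodeAProxy`, `executeForRemote`) are illustrative, not the patent's API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: the node A proxy asks the first machine for a result; the first
// machine creates a proxy proxy as node A's destination, fires node A into
// it, and the proxy proxy then ships the result back.
class Machine {
    final String name;
    final Map<String, Supplier<Integer>> localNodes = new HashMap<>();

    Machine(String name) { this.name = name; }

    // Handle a remote execution request for one of this machine's nodes.
    Integer executeForRemote(String nodeId) {
        Integer[] proxyProxy = new Integer[1];         // the node A proxy proxy
        proxyProxy[0] = localNodes.get(nodeId).get();  // node A fires into it
        return proxyProxy[0];                          // proxy proxy fires: ship the result
    }
}

class NodeAProxy {
    final Machine originMachine;  // stored as an operand when the proxy is created
    final String nodeId;

    NodeAProxy(Machine originMachine, String nodeId) {
        this.originMachine = originMachine;
        this.nodeId = nodeId;
    }

    // Intercepted execution: request node A's result from the first machine.
    Integer requestResult() { return originMachine.executeForRemote(nodeId); }
}
```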
The remote second machine 83 injects the result into the condensed graph as the destination node of the node A proxy 85 and this causes node B 33 to get that result, and graph execution continues as normal.
In practice, an engine module (not shown) of a WebCom application running on the second machine will intercept a node grafting onto the destination port of the node A proxy 85 and the subsequent execution of the node A proxy. This will cause the second machine to send the instruction to the first machine to execute the original node A. Furthermore, this in turn will cause the first machine to create a node A proxy proxy on the first machine. An engine module of the first machine will intercept the execution of the node A proxy proxy and will transmit the result of the node A proxy proxy to the second machine where the result will be populated into the destination port of the node A proxy. The engine module effectively contains a triple manager of a WebCom application.
The second machine knows where to send the instruction due to the fact that the node A proxy contains as its operands the machine identifier (machine ID) of where the originating node to which the proxy relates is located as well as a node port proxy Id. The node port proxy id is effectively a string that is used to identify a Java reference which is stored in a look-up table with that string on the first machine and the Java reference related to the destination port of node A on the first machine. When the node A proxy attempts to execute, the string is transmitted to the first machine, the table is looked up by the first machine and the Java reference to which the string relates is retrieved and used in the execution of node A. The table of port references is a very simple two column table of unique strings corresponding to existing Port object references. Since these references are not portable between machines, we use the string version instead.
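The two-column table of port references described above can be sketched as a simple map from unique strings to local port object references. The class and method names are illustrative assumptions; the point is that only the portable string ever leaves the machine, and the local reference is recovered by a look-up when the string comes back.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the table of port references: unique strings mapped to local
// Port object references, since the references themselves are not portable
// between machines.
class PortRegistry {
    private final Map<String, Object> table = new HashMap<>();
    private long counter = 0;

    // Register a local port reference and return the portable string id.
    String register(Object portReference) {
        String id = "port-" + (counter++);
        table.put(id, portReference);
        return id;
    }

    // Resolve an id received from a remote machine back to the local reference.
    Object resolve(String id) {
        Object ref = table.get(id);
        if (ref == null) throw new IllegalArgumentException("unknown port id: " + id);
        return ref;
    }
}
```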
When a node, in this case node B, is being transferred to a remote machine, effectively a set of instructions to create node B on the remote machine are sent to the remote machine including values for the operand ports of node B. However, the Java reference of the destination port of node A which feeds into node B cannot be transmitted as a Java reference to the remote machine. Instead, a special object, called a destination port proxy object (DPPO) is created for the Java reference. The DPPO contains a string. The string is also stored in a table on the first machine along with the Java reference so that the Java reference may be retrieved in due course using the string. When the second machine receives the set of instructions to create node B, it effectively builds the node using the instructions. As it goes through the instructions it will come across the DPPO.
Once the second machine comes across the DPPO, it will create a node proxy for the operand port of the node B which in the case described above is the node A proxy. The node A proxy will have as its operand port inputs the machine ID of the first machine from which the set of instructions to build node B came from and the string. The use of a string is not absolutely essential and another identifier such as an integer value could be used. The important aspect is to allow referencing of the table on the first machine in due course.
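A DPPO and the node proxy built from it might look like the following sketch. The class and field names are illustrative assumptions; what matters is that the DPPO carries only the origin machine identifier and the string identifier, never the local Java reference, and that the node proxy takes exactly those two values as its operands.

```java
// Sketch of a destination port proxy object (DPPO): it carries only the
// machine id and the portable string id, never the local Java reference.
class DestinationPortProxyObject implements java.io.Serializable {
    final String originMachineId; // machine holding the real port reference
    final String portProxyId;     // key into that machine's table of port references

    DestinationPortProxyObject(String machine, String id) {
        originMachineId = machine;
        portProxyId = id;
    }
}

// When the receiving machine encounters a DPPO while rebuilding a node, it
// creates a node proxy whose operands are the origin machine id and the id.
class NodeProxy {
    final String targetMachineId;
    final String portProxyId;

    NodeProxy(DestinationPortProxyObject dppo) {
        targetMachineId = dppo.originMachineId;
        portProxyId = dppo.portProxyId;
    }
}
```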
Similar to the above, when a node A proxy proxy is created on the first machine, the node A proxy proxy will contain as its operands the second machine ID as well as a port proxy ID for the destination port of the node A proxy. The port proxy ID will contain a string representative of a reference to the destination port of the node A proxy. The above example works for Java programming; however, it would also be effective for other object oriented programming languages. In examples other than the object oriented programming implementation described, it may be possible to reference the memory location, in which case the memory location of the port may be passed between the machines.
It is envisaged that in one embodiment, it may be possible to have the situation where node A is not asked to execute at all, but is passed on to a node sent to a third machine.
In this case, a new DPPO is created to refer to the node proxy's destination port. If the third machine required the result, we would end up with two invocations of this mechanism: machine 3 asks machine 2 for the result, causing machine 2 to ask machine 1 for the result. An optimisation would be to see that we are sending out a reference to an "unevaluated" node proxy and just reuse the original DPPO object and send that to machine 3 as the operand value.
In the current example, the simple partition of a condensed graph over a pair of machines is illustrated; however, it will be understood how the same principles apply when partitioning the condensed graph over more than two machines. It can be seen that this procedure maintains the dependencies that were expressed in the original graph, and causes no duplication of node execution. It will be further understood that the execution of the node A proxy could be requested by another node (not shown) that has the destination port of the node A proxy as its operand port, but in the example shown only one node, node B, has been shown for purposes of clarity.
Referring to Figure 9, there is shown a diagrammatic representation of a condensed graphs machine used to implement the method according to the present invention. The condensed graphs machine, indicated generally by the reference numeral 91, comprises a Triple Manager 93 and an Ancillary Processor 95 connected by a communication network 97. Together these components interact to perform the WebCom application.
The WebCom application is a modular distributed computing platform which is responsible for taking a fireable node, locating a suitable site for execution of that node and transporting the result of that execution back to the condensed graph for propagation to that node's destinations. Performing this task requires complex processes in the areas of load balancing, scheduling, maintaining network communications, resource discovery, security, fault tolerance and the like. In the embodiment shown, only one ancillary processor 95 is shown; however, it will be understood that many ancillary processors may be incorporated into the condensed graphs machine. The network connecting the triple manager and the ancillary processor is preferably the internet or another network that facilitates communications between remote devices. The triple manager 93 constructs computational triples which form the instructions of the condensed graphs machine, and some of the instructions may be executed in the triple manager. Other instructions will be executed in the ancillary processor 95.
Referring to Figure 10, there is shown a diagrammatic representation of a Triple Manager showing the fundamental components thereof. The triple manager, indicated by the reference numeral 100, comprises a Triple construction process 101, a definition graph memory 103, a V-graph memory 105 and a triple list 107. The definition graph memory is used to hold the definitions of the condensation sequence of the condensed graphs. The triple construction process 101 copies the required definitions in the definition graph memory 103 into the V-graph memory 105 to dynamically extend the V-graph as the computation proceeds. The V-graph memory 105 contains a representation of a V-graph and so reflects the current state of computation. Finally, the triple list contains all fireable condensed graphs and it is rebuilt each time step by the triple construction process.
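The per-time-step cycle of the triple construction process can be sketched as follows. This is a hedged illustration, not the patented implementation: the class names, the representation of a triple as (operator, operands, destination), and the fireability test are assumptions made for the sketch.

```java
import java.util.*;

// Sketch of the triple-manager cycle of Figure 10: each time step, the
// triple construction process scans the V-graph for fireable nodes --
// nodes whose operand ports are all populated -- and rebuilds the triple
// list from them.
public class TripleManagerSketch {

    static class Node {
        final String operator;
        final List<Integer> operands = new ArrayList<>();
        final int arity;                       // number of operand ports
        Node(String operator, int arity) { this.operator = operator; this.arity = arity; }
        boolean fireable() { return operands.size() == arity; }
    }

    // A computational triple: an operator together with its operands and
    // a destination for the result.
    record Triple(String operator, List<Integer> operands, String destination) {}

    static List<Triple> rebuildTripleList(Map<String, Node> vGraph,
                                          Map<String, String> destinations) {
        List<Triple> triples = new ArrayList<>();
        for (Map.Entry<String, Node> e : vGraph.entrySet()) {
            Node n = e.getValue();
            if (n.fireable()) {
                triples.add(new Triple(n.operator, List.copyOf(n.operands),
                                       destinations.get(e.getKey())));
            }
        }
        return triples;
    }

    static int demo() {
        Map<String, Node> vGraph = new LinkedHashMap<>();
        Node add = new Node("add", 2);
        add.operands.add(1);
        add.operands.add(2);                   // both operand ports filled
        Node mul = new Node("mul", 2);
        mul.operands.add(3);                   // still waiting on one operand
        vGraph.put("A", add);
        vGraph.put("B", mul);
        // Only node A is fireable, so only one triple is constructed.
        return rebuildTripleList(vGraph, Map.of("A", "B.port0", "B", "out")).size();
    }

    public static void main(String[] args) {
        System.out.println(demo());            // prints 1
    }
}
```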
The above description of Figure 10 describes one implementation of the present invention. In other implementations using object oriented programming, for example a Java implementation, the definition graph memory 103 can be replaced by Java classes or XML definitions. Furthermore, the standard object oriented instantiation procedure can carry out the function of the triple construction process. Finally, the triple list is not necessarily handled in the triple manager but instead is handled by the engine module of a WebCom application instance.
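An XML definition standing in for the definition graph memory might look something like the fragment below. This is purely illustrative: the patent does not specify an XML schema, and every element and attribute name here is an assumption.

```xml
<!-- Illustrative only: a condensed-graph definition held as XML, in place
     of the definition graph memory 103 of Figure 10 -->
<condensedGraph name="example">
  <node id="A" operator="add">
    <destination node="B" port="0"/>
  </node>
  <node id="B" operator="mul">
    <destination node="X" port="0"/>
  </node>
</condensedGraph>
```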
Referring to Figures 11 and 12, there is shown a pair of graphical representations of a Generator/Consumer application associated with a WebCom application that implements the present invention. WebCom is the name given to the application that facilitates the processing of condensed graphs as described above. The Generator/Consumer application is effectively any application that utilises the WebCom application and the condensed graphs processing model to process data of the generator/consumer application's choosing. The generator/consumer application, indicated by the reference numeral 111, is connected to a backplane 113 of the WebCom application. The backplane 113 is the minimal WebCom configuration and it provides a mechanism for passing work and results to and from the WebCom application and has an associated user interface through which it may be instantiated. The backplane is associated with the generator/consumer application by application interface hooks that enable the generator/consumer application to act as an alternative to the graphical user interface (GUI) and command line interface of the WebCom application. The generator/consumer applications can therefore create work for the WebCom application and consume the results.
At start-up, the backplane may load numerous disparate WebCom modules according to a configuration file (not shown) that specifies Java class names for each module.
Alternatively, or in addition to the above, the backplane module may dynamically load and unload modules at runtime as desired. The backplane also allows modules to communicate with each other. In addition to the backplane, other modules include an engine module, a communications manager, a load balancing module, a fault tolerance module and a security module. The engine module in the case of the present invention is a condensed graphs module to allow the execution of condensed graphs and the distribution of condensed graphs over a plurality of remote machines. Referring specifically to Figure 12, there is shown a backplane 113 with a compute engine 115 loaded therein. A communications manager (not shown) could also be provided to facilitate communications between other machines running a WebCom application and a condensed graphs engine module to allow for the condensed graphs to be processed over a plurality of machines.
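Loading modules from their Java class names, as the configuration-file mechanism above suggests, can be done with standard Java reflection. The sketch below is an assumption about how such a backplane might work: the `WebComModule` interface, the module registry and the example module are invented for illustration and are not taken from the patent.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a backplane that loads modules dynamically from fully
// qualified Java class names, at start-up or at runtime.
public class BackplaneSketch {

    interface WebComModule { String name(); }

    // An example module; a real configuration file would name classes
    // such as an engine module, communications manager, load balancer, etc.
    public static class EngineModule implements WebComModule {
        public String name() { return "engine"; }
    }

    static final Map<String, WebComModule> loaded = new LinkedHashMap<>();

    // Instantiate a module from its class name and register it so that
    // other modules can find it and communicate with it via the backplane.
    static WebComModule loadModule(String className) {
        try {
            Class<?> cls = Class.forName(className);
            WebComModule m = (WebComModule) cls.getDeclaredConstructor().newInstance();
            loaded.put(m.name(), m);
            return m;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load module " + className, e);
        }
    }

    public static void main(String[] args) {
        // "BackplaneSketch$EngineModule" is the binary name of the nested
        // class above, as it would appear in a configuration file.
        WebComModule m = loadModule("BackplaneSketch$EngineModule");
        System.out.println(m.name());          // prints engine
    }
}
```

Unloading would simply remove the entry from the registry and drop all references to the module instance.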
A more thorough description of the WebCom application and the integration of condensed graphs in WebCom applications is provided in the published thesis paper entitled "Design and Implementation of an N-Tier Metacomputer with Decentralised Fault Tolerance" by James J. Kennedy, a co-inventor of the application in suit, and its entire contents are incorporated herein by way of reference. In particular, the content regarding condensed graphs, the WebCom application, message passing between elements of the WebCom application, message passing between remote machines and the functionality and operation of the WebCom application is incorporated herein.
It will be understood that the present invention may be implemented largely in software.
Therefore, the invention also extends to computer programs, particularly to computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source code, object code or code intermediate between source and object code. The program may be stored on a carrier such as any known computer readable medium, for example a floppy disc, ROM, CD-ROM, DVD, memory stick, flash drive or the like. The carrier may be a transmissible carrier, such as an electrical or optical signal, when the program code is transmitted electronically or downloaded or uploaded through the internet, and the signal may be conveyed via electrical or optical cable or by radio, satellite or other means. When the program is embodied on a signal, which may be conveyed directly by a cable or other device, the carrier may be constituted by such a cable or other device or means. It is further envisaged that the computer program may be stored in an integrated circuit. Furthermore, the invention may also be embodied in hardware such as a dedicated electronic circuit comprising a plurality of circuit components, a dedicated chip, a field programmable gate array (FPGA) or like device.
In this specification the terms "comprise, comprises, comprised and comprising" and the terms "include, includes, included and including" are all deemed totally interchangeable and should be afforded the widest possible interpretation.
The invention is in no way limited to the embodiment hereinbefore described but may be varied in both construction and detail within the scope of the specification.

Claims (15)

  1. A method of processing a condensed graph spread over a plurality of machines, the condensed graph comprising a plurality of nodes including a first node located on a first machine and a second node located on a second machine, the second node having an operand port supplied by the destination port of the first node, the method comprising:
creating a first node destination port proxy object for the operand port of the second node and sending that first node destination port proxy object to the second machine;
the second machine generating a first node proxy for the first node with the first node destination port proxy object;
the first node proxy receiving a request to execute, the first node proxy transmitting an instruction to the first machine to execute the first node;
the first machine receiving the request, creating a first node proxy proxy, providing the first node with the first node proxy proxy as a destination value, executing the first node and passing the result of the execution of the first node to the first node proxy proxy;
the first node proxy proxy executing and thereafter transmitting the result of the execution of the first node to the first node proxy on the second machine; and
populating the destination port of the first node proxy with the result from the first node.
  2. A method as claimed in claim 1 in which the method further comprises the step of storing first machine information in the first node proxy.
  3. A method as claimed in claim 2 in which the step of storing first machine information in the first node proxy comprises storing the name of the first machine as an operand of the first node proxy.
  4. A method as claimed in any preceding claim in which the method further comprises the step of storing first node port information in the first node proxy.
  5. A method as claimed in claim 4 in which the step of storing the first node port information in the first node proxy comprises storing a port proxy identifier as an operand of the first node proxy.
  6. A method as claimed in any preceding claim further comprising the step of storing second machine information in the first node proxy proxy.
  7. A method as claimed in claim 6 in which the step of storing second machine information in the first node proxy proxy comprises storing the name of the second machine as an operand of the first node proxy proxy.
  8. A method as claimed in any preceding claim in which the method further comprises the step of storing first node proxy port information in the first node proxy proxy.
  9. A method as claimed in claim 8 in which the step of storing the first node proxy port information in the first node proxy proxy comprises storing a port proxy identifier as an operand of the first node proxy proxy.
  10. A method as claimed in any preceding claim in which the first node destination port proxy object comprises a port proxy identifier representative of a first machine destination port reference and the method comprises the step of storing the port proxy identifier in a table with the first machine destination port reference.
  11. A computer program product having program instructions for causing a computer to perform the method of any of claims 1 to 10.
  12. A method of partitioning a condensed graph amongst a plurality of machines, the condensed graph comprising a plurality of nodes including a first node whose destination port is connected to an operand port of a second node, the method comprising the steps of: transmitting the second node to a second machine; and creating a first node proxy on the second machine.
  13. A method as claimed in claim 12 in which the method further comprises the step of creating a first node proxy proxy on the first machine for the destination port of the first node.
  14. A computer program product having program instructions for causing a computer to perform the method of claim 12 or 13.
  15. A condensed graph machine comprising a Triple manager and at least one ancillary processor, the condensed graph machine running computer program code having program instructions for causing the condensed graph machine to operate according to any of claims 1 to 14.
GB0904956A 2009-03-24 2009-03-24 Processing a condensed graph on multiple machines using proxies Withdrawn GB2468859A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0904956A GB2468859A (en) 2009-03-24 2009-03-24 Processing a condensed graph on multiple machines using proxies
PCT/EP2010/053852 WO2010108967A2 (en) 2009-03-24 2010-03-24 A method of processing a condensed graph


Publications (2)

Publication Number Publication Date
GB0904956D0 GB0904956D0 (en) 2009-05-06
GB2468859A true GB2468859A (en) 2010-09-29

Family

ID=40640000

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0904956A Withdrawn GB2468859A (en) 2009-03-24 2009-03-24 Processing a condensed graph on multiple machines using proxies

Country Status (2)

Country Link
GB (1) GB2468859A (en)
WO (1) WO2010108967A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0977157A2 (en) * 1998-07-31 2000-02-02 Sony United Kingdom Limited Animation of video special effects
WO2006108850A2 (en) * 2005-04-14 2006-10-19 International Business Machines Corporation Multi-level cache apparatus and method for enhanced remote invocation performance
US20070080964A1 (en) * 2005-10-07 2007-04-12 Florian Kainz Method of utilizing product proxies with a dependency graph
US20070282863A1 (en) * 2006-05-30 2007-12-06 Schacher Ritchard L Method, system, and program product for providing proxies for data objects

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4814978A (en) * 1986-07-15 1989-03-21 Dataflow Computer Corporation Dataflow processing element, multiprocessor, and processes
EP1152331B1 (en) * 2000-03-16 2017-11-22 Kabushiki Kaisha Square Enix (also trading as Square Enix Co., Ltd.) Parallel task processing system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lichun Ji, Deters R; Coordination & Enterprise Wide P2P Computing *

Also Published As

Publication number Publication date
GB0904956D0 (en) 2009-05-06
WO2010108967A2 (en) 2010-09-30
WO2010108967A3 (en) 2011-02-10

Similar Documents

Publication Publication Date Title
US9898278B2 (en) Release and management of composite applications on PaaS
US10326708B2 (en) Cloud computing services framework
US20190095176A1 (en) Managing Interfaces for Sub-Graphs
US20230004433A1 (en) Data processing method and apparatus, distributed data flow programming framework, and related component
CN105242962B (en) The quick triggering method of lightweight thread based on isomery many-core
US20120222019A1 (en) Control Flow Graph Operating System Configuration
US10282200B2 (en) Out-of-deployment-scope modification of information-technology application using lifecycle blueprint
US8027817B2 (en) Simulation management within a grid infrastructure
US20240220220A1 (en) Method, apparatus, and computer-readable medium for intelligent execution of a solution on a computer network
Sridhar et al. Dynamic module replacement in distributed protocols
US9135025B2 (en) Launcher for software applications
CN110213333B (en) Task processing method and device, electronic equipment and computer readable medium
GB2468859A (en) Processing a condensed graph on multiple machines using proxies
Glatard et al. Generic web service wrapper for efficient embedding of legacy codes in service-based workflows
Aridor et al. Open job management architecture for the Blue Gene/L supercomputer
US20050193393A1 (en) System and method for generalized imaging utilizing a language agent and encapsulated object oriented polyphase preboot execution and specification language
Stevenson et al. Smart proxies in java rmi with dynamic aspect-oriented programming
Morrison et al. The role of XML within the WebCom metacomputing platform
US11216259B1 (en) Performing multiple functions in single accelerator program without reload overhead in heterogenous computing system
US20050193371A1 (en) Encapsulated object oriented polyphase preboot execution and specification language
Motoyama et al. Method for Transparent Transformation of Monolithic Programs into Microservices for Effective Use of Edge Resources
Metzner et al. Parallelism in MuPAD 1.4
Brandberg et al. Enabling Flow Preservation and Portability in Multicore Implementations of Simulink Models
Healy et al. ARC: a metacomputing environment for clusters augmented with reconfigurable hardware
CN113032094A (en) CAD containerization method and device and electronic equipment

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)