AU2005236085B2 - Modified computer architecture with initialization of objects - Google Patents

Modified computer architecture with initialization of objects

Info

Publication number
AU2005236085B2
Authority
AU
Australia
Prior art keywords
computers
computer
application program
initialization routine
loading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2005236085A
Other versions
AU2005236085A1 (en)
Inventor
John Matthew Holt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Waratek Pty Ltd
Original Assignee
Waratek Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2004902146A external-priority patent/AU2004902146A0/en
Application filed by Waratek Pty Ltd filed Critical Waratek Pty Ltd
Priority to AU2005236085A priority Critical patent/AU2005236085B2/en
Priority claimed from PCT/AU2005/000578 external-priority patent/WO2005103924A1/en
Publication of AU2005236085A1 publication Critical patent/AU2005236085A1/en
Application granted granted Critical
Publication of AU2005236085B2 publication Critical patent/AU2005236085B2/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Devices For Executing Special Programs (AREA)

Description

MODIFIED COMPUTER ARCHITECTURE WITH INITIALIZATION OF OBJECTS
Field of the Invention

The present invention relates to computers and, in particular, to a modified machine architecture which enables the operation of an application program simultaneously on a plurality of computers interconnected via a communications network.
Background Art

Ever since the advent of computers, and computing, software for computers has been written to be operated upon a single machine. As indicated in Fig. 1, that single prior art machine 1 is made up from a central processing unit, or CPU, 2 which is connected to a memory 3 via a bus 4. Also connected to the bus 4 are various other functional units of the single machine 1 such as a screen 5, keyboard 6 and mouse 7.
A fundamental limit to the performance of the machine 1 is that the data to be manipulated by the CPU 2, and the results of those manipulations, must be moved by the bus 4. The bus 4 suffers from a number of problems including so called bus "queues" formed by units wishing to gain an access to the bus, contention problems, and the like. These problems can, to some extent, be alleviated by various stratagems including cache memory, however, such stratagems invariably increase the administrative overhead of the machine 1.
Naturally, over the years various attempts have been made to increase machine performance. One approach is to use symmetric multiple processors. This prior art approach has been used in so called "super" computers and is schematically indicated in Fig. 2. Here a plurality of CPU's 12 are connected to global memory 13. Again, a bottleneck arises in the communications between the CPU's 12 and the memory 13.
This process has been termed "Single System Image". There is only one application and one whole copy of the memory for the application which is distributed over the global memory. The single application can read from and write to, (ie share) any memory location completely transparently.
Where there are a number of such machines interconnected via a network, this is achieved by taking the single application written for a single machine and partitioning the required memory resources into parts. These parts are then distributed across a number of computers to form the global memory 13 accessible by all CPU's 12. This procedure relies on masking, or hiding, the memory partition from the single running application program. The performance degrades when one CPU on one machine must access (via a network) a memory location physically located in a different machine.
Although super computers have been technically successful in achieving high computational rates, they are not commercially successful in that their inherent complexity makes them extremely expensive not only to manufacture but to administer. In particular, the single system image concept has never been able to scale over "commodity" (or mass produced) computers and networks. In particular, the Single System Image concept has only found practical application on very fast (and hence very expensive) computers interconnected by very fast (and similarly expensive) networks.
A further possibility of increased computer power through the use of a plural number of machines arises from the prior art concept of distributed computing which is schematically illustrated in Fig. 3. In this known arrangement, a single application program (Ap) is partitioned by its author (or another programmer who has become familiar with the application program) into various discrete tasks so as to run upon, say, three machines in which case n in Fig. 3 is the integer 3. The intention here is that each of the machines M1...M3 runs a different third of the entire application and the intention is that the loads applied to the various machines be approximately equal.
The machines communicate via a network 14 which can be provided in various forms such as a communications link, the internet, intranets, local area networks, and the like. Typically the speed of operation of such networks 14 is an order of magnitude slower than the speed of operation of the bus 4 in each of the individual machines M1, M2, etc.
Distributed computing suffers from a number of disadvantages. Firstly, it is a difficult job to partition the application and this must be done manually. Secondly, communicating data, partial results, results and the like over the network 14 is an administrative overhead. Thirdly, the need for partitioning makes it extremely difficult to scale upwardly by utilising more machines since the application having been partitioned into, say three, does not run well upon four machines. Fourthly, in the event that one of the machines should become disabled, the overall performance of the entire system is substantially degraded.
A further prior art arrangement is known as network computing via "clusters" as is schematically illustrated in Fig. 4. In this approach, the entire application is loaded onto each of the machines M1, M2...Mn. Each machine communicates with a common database but does not communicate directly with the other machines.
Although each machine runs the same application, each machine is doing a different "job" and uses only its own memory. This is somewhat analogous to a number of windows each of which sells train tickets to the public. This approach does operate, is scalable and mainly suffers from the disadvantage that it is difficult to administer the network.
In computer languages such as JAVA and MICROSOFT.NET there are two major types of constructs with which programmers deal. In the JAVA language these are known as objects and classes. Every time an object is created there is an initialization routine run known as "<init>". Similarly, every time a class is loaded there is an initialization routine known as "<clinit>". Other languages use different terms but utilize a similar concept.
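By way of illustration only (this fragment is not taken from the Annexures), the following JAVA source shows the two constructs: a static initializer block, which the compiler places in the <clinit> routine and which runs once when the class is loaded, and a constructor, which the compiler places in an <init> routine and which runs every time a new object is created.

    public class Example {
        static long classLoadTime;            // written once, by <clinit>
        private final long objectCreateTime;  // written on every object creation, by <init>

        // The static initializer block is compiled into the <clinit> routine.
        static {
            classLoadTime = System.currentTimeMillis();
        }

        // The constructor body is compiled into an <init> routine.
        public Example() {
            objectCreateTime = System.currentTimeMillis();
        }
    }
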
The present invention discloses a computing environment in which an application program operates simultaneously on a plurality of computers. In such an environment it is necessary to ensure that the abovementioned initialization routines operate in a consistent fashion across all the machines. It is this goal of consistent initialization that is the genesis of the present invention.
In accordance with a first aspect of the present invention there is disclosed a multiple computer system having at least one application program each written to operate on only a single computer but running simultaneously on a plurality of computers interconnected by a communications network, wherein different portions of said application program(s) execute substantially simultaneously on different ones of said computers and for each said portion a like plurality of substantially identical objects are created, each in the corresponding computer and each having a substantially identical name, and wherein the initial contents of each of said identically named objects is substantially the same.
In accordance with a second aspect of the present invention there is disclosed a plurality of computers interconnected via a communications link and simultaneously operating at least one application program each written to operate on only a single computer wherein each said computer substantially simultaneously executes a different portion of said application program(s), each said computer in operating its application program portion creates objects only in local memory physically located in each said computer, the contents of the local memory utilized by each said computer are fundamentally similar but not, at each instant, identical, and every one of said computers has distribution update means to distribute to all other said computers objects created by said one computer.
In accordance with a third aspect of the present invention there is disclosed a method of running simultaneously on a plurality of computers at least one application program each written to operate on only a single computer, said computers being interconnected by means of a communications network, said method comprising the steps of: (i) executing different portions of said application program(s) on different ones of said computers and for each said portion creating a like plurality of substantially identical objects each in the corresponding computer and each having a substantially identical name, and (ii) creating the initial contents of each of said identically named objects substantially the same.
In accordance with a fourth aspect of the present invention there is disclosed a method of compiling or modifying an application program written to operate on only a single computer to have different portions thereof execute substantially simultaneously on different ones of a plurality of computers interconnected via a communications link, said method comprising the steps of: (i) detecting instructions which create objects utilizing one of said computers, (ii) activating an initialization routine following each said detected object creation instruction, said initialization routine forwarding each created object to the remainder of said computers.
In accordance with a fifth aspect of the present invention there is disclosed a multiple thread processing computer operation in which individual threads of a single application program written to operate on only a single computer are simultaneously being processed each on a different corresponding one of a plurality of computers interconnected via a communications link, the improvement comprising communicating objects created in local memory physically associated with the computer processing each thread to the local memory of each other said computer via said communications link.
In accordance with a sixth aspect of the present invention there is disclosed a method of ensuring consistent initialization of an application program written to operate on only a single computer but different portions of which are to be executed simultaneously each on a different one of a plurality of computers interconnected via a communications network, said method comprising the steps of: (i) scrutinizing said application program at, or prior to, or after loading to detect each program step defining an initialization routine, and (ii) modifying said initialization routine to ensure consistent operation of all said computers.
In accordance with a seventh aspect of the present invention there is disclosed a computer program product comprising a set of program instructions stored in a storage medium and operable to permit a plurality of computers to carry out the abovementioned methods.
Brief Description of the Drawings

Embodiments of the present invention will now be described with reference to the drawings in which:
Fig. 1 is a schematic view of the internal architecture of a conventional computer,
Fig. 2 is a schematic illustration showing the internal architecture of known symmetric multiple processors,
Fig. 3 is a schematic representation of prior art distributed computing,
Fig. 4 is a schematic representation of a prior art network computing using clusters,
Fig. 5 is a schematic block diagram of a plurality of machines operating the same application program in accordance with a first embodiment of the present invention,
Fig. 6 is a schematic illustration of a prior art computer arranged to operate JAVA code and thereby constitute a JAVA virtual machine,
Fig. 7 is a drawing similar to Fig. 6 but illustrating the initial loading of code in accordance with the preferred embodiment,
Fig. 8 is a drawing similar to Fig. 5 but illustrating the interconnection of a plurality of computers each operating JAVA code in the manner illustrated in Fig. 7,
Fig. 9 is a flow chart of the procedure followed during loading of the same application on each machine in the network,
Fig. 10 is a flow chart showing a modified procedure similar to that of Fig. 9,
Fig. 11 is a schematic representation of multiple thread processing carried out on the machines of Fig. 8 utilizing a first embodiment of memory updating,
Fig. 12 is a schematic representation similar to Fig. 11 but illustrating an alternative embodiment,
Fig. 13 illustrates multi-thread memory updating for the computers of Fig. 8,
Fig. 14 is a schematic illustration of a prior art computer arranged to operate in JAVA code and thereby constitute a JAVA virtual machine,
Fig. 15 is a schematic representation of n machines running the application program and serviced by an additional server machine X,
Fig. 16 is a flow chart illustrating the modification of initialization routines,
Fig. 17 is a flow chart illustrating the continuation or abortion of initialization routines,
Fig. 18 is a flow chart illustrating the enquiry sent to the server machine X,
Fig. 19 is a flow chart of the response of the server machine X to the request of Fig. 18,
Fig. 20 is a flowchart illustrating a modified initialization routine for the <clinit> instruction,
Fig. 21 is a flowchart illustrating a modified initialization routine for the <init> instruction,
Fig. 22 is a schematic representation of two laptop computers interconnected to simultaneously run a plurality of applications, with both applications running on a single computer,
Fig. 23 is a view similar to Fig. 22 but showing the Fig. 22 apparatus with one application operating on each computer, and
Fig. 24 is a view similar to Figs. 22 and 23 but showing the Fig. 22 apparatus with both applications operating simultaneously on both computers.
The specification includes Annexures A and B which provide actual program fragments which implement various aspects of the described embodiments. Annexure A relates to fields and Annexure B relates to initialization.
Detailed Description

In connection with Fig. 5, in accordance with a preferred embodiment of the present invention a single application program 50 can be operated simultaneously on a number of machines M1, M2...Mn communicating via network 53. As it will become apparent hereafter, each of the machines M1, M2...Mn operates with the same application program 50 on each machine M1, M2...Mn and thus all of the machines M1, M2...Mn have the same application code and data 50. Similarly, each of the machines M1, M2...Mn operates with the same (or substantially the same) modifier 51 on each machine M1, M2...Mn and thus all of the machines M1, M2...Mn have the same (or substantially the same) modifier 51 with the modifier of machine M2 being designated 51/2. In addition, during the loading of, or preceding the execution of, the application 50 on each machine M1...Mn, each application 50 has been modified by the corresponding modifier 51 according to the same rules (or substantially the same rules since minor optimising changes are permitted within each modifier 51/1...51/n).
As a consequence of the above described arrangement, if each of the machines M1, M2...Mn has, say, a shared memory capability of 10MB, then the total shared memory available to each application 50 is not, as one might expect, 10n MB but rather only 10MB. However, how this results in improved operation will become apparent hereafter. Naturally, each machine M1, M2...Mn has an unshared memory capability. The unshared memory capability of the machines M1, M2...Mn are normally approximately equal but need not be.
It is known from the prior art to operate a machine (produced by one of various manufacturers and having an operating system operating in one of various different languages) in a particular language of the application, by creating a virtual machine as schematically illustrated in Fig. 6. The prior art arrangement of Fig. 6 takes the form of the application 50 written in the Java language and executing within a Java Virtual Machine 61. Thus, where the intended language of the application is the language JAVA, a JAVA virtual machine is created which is able to operate code in JAVA irrespective of the machine manufacturer and internal details of the machine.
For further details see "The JAVA Virtual Machine Specification" 2nd Edition by T. Lindholm and F. Yellin of Sun Microsystems Inc. of the USA.
This well known prior art arrangement of Fig. 6 is modified in accordance with the preferred embodiment of the present invention by the provision of an additional facility which is conveniently termed "distributed run time" or DRT 71 as seen in Fig. 7. In Fig. 7, the application 50 is loaded onto the Java Virtual Machine 72 via the distributed runtime system 71 through the loading procedure indicated by arrow 75. A distributed run time system is available from the Open Software Foundation under the name of Distributed Computing Environment (DCE). In particular, the distributed runtime 71 comes into operation during the loading procedure indicated by arrow 75 of the JAVA application 50 so as to initially create the JAVA virtual machine 72. The sequence of operations during loading will be described hereafter in relation to Fig. 9.
Fig. 8 shows in modified form the arrangement of Fig. 5 utilising JAVA virtual machines, each as illustrated in Fig. 7. It will be apparent that again the same application 50 is loaded onto each machine M1, M2...Mn. However, the communications between each machine M1, M2...Mn, and indicated by arrows 83, although physically routed through the machine hardware, are controlled by the individual DRT's 71/1...71/n within each machine. Thus, in practice this may be conceptualised as the DRT's 71/1...71/n communicating with each other via the network 73 rather than the machines M1, M2...Mn themselves.
Turning now to Figs. 7 and 9, during the loading procedure 75, the program being loaded to create each JAVA virtual machine 72 is modified. This modification commences at 90 in Fig. 9 and involves the initial step 91 of detecting all memory locations (termed fields in JAVA but equivalent terms are used in other languages) in the application 50 being loaded. Such memory locations need to be identified for subsequent processing at steps 92 and 93. The DRT 71 during the loading procedure 75 creates a list of all the memory locations thus identified, the JAVA fields being listed by object and class. Both volatile and synchronous fields are listed.
The next phase (designated 92 in Fig. 9) of the modification procedure is to search through the executable application code in order to locate every processing activity that manipulates or changes field values corresponding to the list generated at step 91 and thus writes to fields so the value at the corresponding memory location is changed. When such an operation (typically putstatic or putfield in the JAVA language) is detected which changes the field value, then an "updating propagation routine" is inserted by step 93 at this place in the program to ensure that all other machines are notified that the value of the field has changed. Thereafter, the loading procedure continues in a normal way as indicated by step 94 in Fig. 9.
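The modification itself is made during loading at the level of the executable code; purely as a source-level sketch of its effect, the fragment below shows a field write followed by a hypothetical DRT call (the names DRT and propagateWrite are assumptions for illustration and do not appear in the Annexures) which notifies the other machines of the identity of the changed field and its new value.

    // Hypothetical stand-in for the distributed run time; a real implementation
    // would transmit the update to machines M2...Mn via the network 53.
    class DRT {
        static void propagateWrite(String fieldIdentity, int newValue) {
            System.out.println("propagate " + fieldIdentity + " = " + newValue);
        }
    }

    class Counter {
        static int total;   // a listed memory location (JAVA field)

        static void increment() {
            total = total + 1;                           // original write (putstatic)
            DRT.propagateWrite("Counter.total", total);  // inserted updating propagation routine
        }
    }
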
An alternative form of initial modification during loading is illustrated in Fig. 10. Here the start and listing steps 90 and 91 and the searching step 92 are the same as in Fig. 9. However, rather than insert the "updating propagation routine" as in step 93 in which the processing thread carries out the updating, instead an "alert routine" is inserted at step 103. The "alert routine" instructs a thread or threads not used in processing and allocated to the DRT, to carry out the necessary propagation.
This step 103 is a quicker alternative which results in lower overhead.
Once this initial modification during the loading procedure has taken place, then either one of the multiple thread processing operations illustrated in Figs. 11 and 12 takes place. As seen in Fig. 11, multiple thread processing 110 on the machines consisting of threads 111/1...111/4 is occurring and the processing of the second thread 111/2 (in this example) results in that thread 111/2 becoming aware at step 113 of a change of field value. At this stage the normal processing of that thread 111/2 is halted at step 114, and the same thread 111/2 notifies all other machines M2...Mn via the network 53 of the identity of the changed field and the changed value which occurred at step 113. At the end of that communication procedure, the thread 111/2 then resumes the processing at step 115 until the next instance where there is a change of field value.
In the alternative arrangement illustrated in Fig. 12, once a thread 121/2 has become aware of a change of field value at step 113, it instructs DRT processing 120 (as indicated by step 125 and arrow 127) that another thread(s) 121/1 allocated to the DRT processing 120 is to propagate in accordance with step 128 via the network 53 to all other machines M2...Mn the identity of the changed field and the changed value detected at step 113. This is an operation which can be carried out quickly and thus the processing of the initial thread 111/2 is only interrupted momentarily as indicated in step 125 before the thread 111/2 resumes processing in step 115. The other thread 121/1 which has been notified of the change (as indicated by arrow 127) then communicates that change as indicated in step 128 via the network 53 to each of the other machines M2...Mn.
This second arrangement of Fig. 12 makes better utilisation of the processing power of the various threads 111/1...111/3 and 121/1 (which are not, in general, subject to equal demands) and gives better scaling with increasing size of "n", (n being an integer greater than or equal to 2 which represents the total number of machines which are connected to the network 53 and which run the application program 50 simultaneously). Irrespective of which arrangement is used, the identities and values of the changed fields detected at step 113 are propagated to all the other machines M2...Mn on the network.
This is illustrated in Fig. 13 where the DRT 71/1 and its thread 121/1 of Fig. 12 (represented by step 128 in Fig. 13) sends via the network 53 the identity and changed value of the listed memory location generated at step 113 of Fig. 12 by processing in machine M1, to each of the other machines M2...Mn.

Each of the other machines M2...Mn carries out the action indicated by steps 135 and 136 in Fig. 13 for machine Mn by receiving the identity and value pair from the network 53 and writing the new value into the local corresponding memory location.
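A minimal sketch of this receiving side (steps 135 and 136) is set out below, assuming each machine keeps a table from the globally agreed field identity to its own local storage; the class and method names are illustrative only and are not taken from the Annexures.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class UpdateReceiver {
        // Maps the global field identity to the value held in local memory.
        private final Map<String, Object> localFields = new ConcurrentHashMap<>();

        // Step 135: receive the identity and value pair from the network 53.
        // Step 136: write the new value into the local corresponding memory location.
        void onUpdate(String fieldIdentity, Object newValue) {
            localFields.put(fieldIdentity, newValue);
        }

        Object read(String fieldIdentity) {
            return localFields.get(fieldIdentity);  // reads are always satisfied locally
        }
    }
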
In the prior art arrangement in Fig. 3 utilising distributed software, memory accesses from one machine's software to memory physically located on another machine are permitted by the network interconnecting the machines. However, such memory accesses can result in delays in processing of the order of 10^6 to 10^7 cycles of the central processing unit of the machine. This in large part accounts for the diminished performance of the multiple interconnected machines.
However, in the present arrangement as described above in connection with Fig. 8, it will be appreciated that all reading of data is satisfied locally because the current value of all fields is stored on the machine carrying out the processing which generates the demand to read memory. Such local processing can be satisfied within 10^2 to 10^3 cycles of the central processing unit. Thus, in practice, there is substantially no waiting for memory accesses which involve reads.
However, most application software reads memory frequently but writes to memory relatively infrequently. As a consequence, the rate at which memory is being written or re-written is relatively slow compared to the rate at which memory is being read. Because of this slow demand for writing or re-writing of memory, the fields can be continually updated at a relatively low speed via the inexpensive commodity network 53, yet this low speed is sufficient to meet the application program's demand for writing to memory. The result is that the performance of the Fig. 8 arrangement is vastly superior to that of Fig. 3.
In a further modification in relation to the above, the identities and values of changed fields can be grouped into batches so as to further reduce the demands on the communication speed of the network 53 interconnecting the various machines.
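A simple sketch of such batching, using an invented UpdateBatcher helper for illustration only, is given below: changed identity and value pairs are queued and periodically drained as a single message rather than being sent one at a time.

    import java.util.HashMap;
    import java.util.Map;

    class UpdateBatcher {
        // Only the latest value per field needs to be sent, so a map suffices.
        private final Map<String, Object> pending = new HashMap<>();

        synchronized void queue(String fieldIdentity, Object newValue) {
            pending.put(fieldIdentity, newValue);
        }

        // Called periodically; the returned batch is sent as one network message.
        synchronized Map<String, Object> drainBatch() {
            Map<String, Object> batch = new HashMap<>(pending);
            pending.clear();
            return batch;
        }
    }
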
It will also be apparent to those skilled in the art that in a table created by each DRT 71 when initially recording the fields, for each field there is a name or identity which is common throughout the network and which the network recognises.
However, in the individual machines the memory location corresponding to a given named field will vary over time since each machine will progressively store changed field values at different locations according to its own internal processes. Thus the table in each of the DRTs will have, in general, different memory locations but each global "field name" will have the same "field value" stored in the different memory locations.
It will also be apparent to those skilled in the art that the abovementioned modification of the application program during loading can be accomplished in up to five ways by: (i) re-compilation at loading, (ii) a pre-compilation procedure prior to loading, (iii) compilation prior to loading, (iv) a "just-in-time" compilation, or (v) re-compilation after loading (but, for example, before execution of the relevant or corresponding application code in a distributed environment).
Traditionally the term "compilation" implies a change in code or language, eg from source to object code or one language to another. Clearly the use of the term "compilation" (and its grammatical equivalents) in the present specification is not so restricted and can also include or embrace modifications within the same code or language.
In the first embodiment, a particular machine, say machine M2, loads the application code on itself, modifies it, and then loads each of the other machines M1, M3...Mn (either sequentially or simultaneously) with the modified code. In this arrangement, which may be termed "master/slave", each of machines M1, M3...Mn loads what it is given by machine M2.
In a still further embodiment, each machine receives the application code, but modifies it and loads the modified code on that machine. This enables the modification carried out by each machine to be slightly different being optimized based upon its architecture and operating system, yet still coherent with all other similar modifications.
In a further arrangement, a particular machine, say M1, loads the unmodified code and all other machines M2, M3...Mn do a modification to delete the original application code and load the modified version.
In all instances, the supply can be branched (ie M2 supplies each of M1, M3, M4, etc directly) or cascaded or sequential (ie M2 supplies M1 which then supplies M3 which then supplies M4, and so on).
In a still further arrangement, the machines M1 to Mn, can send all load requests to an additional machine (not illustrated) which is not running the application program, which performs the modification via any of the aforementioned methods, and returns the modified routine to each of the machines M1 to Mn which then load the modified routine locally. In this arrangement, machines M1 to Mn forward all load requests to this additional machine which returns a modified routine to each machine. The modifications performed by this additional machine can include any of the modifications covered under the scope of the present invention.
Persons skilled in the computing arts will be aware of at least four techniques used in creating modifications in computer code. The first is to make the modification in the original (source) language. The second is to convert the original code (in say JAVA) into an intermediate representation (or intermediate language). Once this conversion takes place the modification is made and then the conversion is reversed.
This gives the desired result of modified JAVA code.
The third possibility is to convert to machine code (either directly or via the abovementioned intermediate language). Then the machine code is modified before being loaded and executed. The fourth possibility is to convert the original code to an intermediate representation, which is then modified and subsequently converted into machine code.
The present invention encompasses all four modification routes and also a combination of two, three or even all four, of such routes.
Turning now to Fig. 14, there is illustrated a schematic representation of a single prior art computer operated as a JAVA virtual machine. In this way, a machine (produced by any one of various manufacturers and having an operating system operating in any one of various different languages) can operate in the particular language of the application program 50, in this instance the JAVA language. That is, a JAVA virtual machine 72 is able to operate code 50 in the JAVA language, and utilize the JAVA architecture irrespective of the machine manufacturer and the internal details of the machine.
In the JAVA language, the initialization routine <clinit> happens only once when a given class file 50A is loaded. However, the initialization routine <init> happens often, for example every time a new object 50X, 50Y and 50Z is created. In addition, classes are loaded prior to objects so that in the application program illustrated in Fig. 14, having a single class 50A and three objects 50X-50Z, the first class 50A is loaded first, then the first object 50X is loaded, then second object 50Y is loaded and finally third object 50Z is loaded. Where, as in Fig. 14, there is only a single computer or machine 72, then no conflict or inconsistency arises in the running of the initialization routines intended to operate during the loading procedure.
However, in the arrangement illustrated in Fig. 8, (and also in Figs. 22-24), a plurality of individual computers or machines M1, M2...Mn are provided each of which is interconnected via a communications network 53 and each of which is provided with a modifier 51 and loaded with a common application program 50. Essentially the modifier 51 is to replicate an identical memory structure and contents on each of the individual machines M1, M2...Mn. It follows therefore that in such a computing environment it is necessary to ensure that each of the individual machines M1, M2...Mn is initialized in a consistent fashion. The modifying function of the modifier 51 of Fig. 5 is provided by the DRT 71 in Fig. 8.
In order to ensure consistent initialization, the application program 50 is scrutinized in order to detect program steps which define an initialization routine.
This scrutiny can take place either prior to loading, or during the loading procedure, or even after the loading procedure 75 (but before execution of the relevant corresponding application code). It may be likened to a compilation procedure with the understanding that the term compilation normally involves a change in code or language, eg from source to object code or one language to another. However, in the present instance the term "compilation" (and its grammatical equivalents) is not so restricted and can also include or embrace modifications within the same code or language.
As a consequence, in the abovementioned scrutiny <clinit> routines are initially looked for and when found a modifying code (typically several instructions) is inserted so as to give rise to a modified <clinit> routine. This modified routine is to load the class 50A on one of the machines, for example JVM#1, and tell all the other machines M2...Mn that such a class 50A exists and its present state. There are several different modes whereby this modification and loading can be carried out.
Thus, in one mode, the DRT 71 on the loading machine, in this example JVM#1, asks the DRT's 71/2...71/n of all the other machines if the first class 50A has already been initialized. If the answer to this question is yes, then the normal initialization procedure is turned off or disabled. If the answer is no, then the normal initialization procedure is operated and the consequential changes brought about during that procedure are transferred to all other machines as indicated by arrows 83 in Fig. 8.
A similar procedure happens on each occasion that an object, say 50X, 50Y or 50Z, is to be loaded. Where the DRT 71/1 does not discern, as a result of interrogation, that the particular object, say object 50Y, in question has already been loaded onto the other machines M2...Mn, then the DRT 71/1 runs the object initialization routine, and loads on each of the other machines M2...Mn an equivalent object (which may conveniently be termed a peer object) together with a copy of the initial values. However, if the DRT 71/1 determines that the object 50Y in question already exists on the other machines, then the normal initialization function is disabled and a local copy is created with a copy of the current values. Again there are various ways of bringing about the desired result.
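The following sketch illustrates, at the source level and with invented names only, the decision just described for an object such as 50Y: if the interrogation shows that no peer object yet exists the normal initialization is run and the result propagated, otherwise the normal initialization is disabled and a local copy of the current values is created.

    import java.util.function.Supplier;

    class ObjectLoaderSketch {
        Object load(String globalObjectName, Supplier<Object> normalInitialization) {
            if (peerAlreadyExists(globalObjectName)) {
                // Another machine has already initialised this object:
                // disable the normal <init> routine and copy the current values.
                return localCopyOfCurrentValues(globalObjectName);
            }
            Object created = normalInitialization.get();    // run the normal <init> routine
            propagatePeerObject(globalObjectName, created);  // load a peer object on M2...Mn
            return created;
        }

        // Placeholders only; a real DRT would interrogate the other machines
        // (or server machine X) and communicate via the network 53.
        boolean peerAlreadyExists(String name) { return false; }
        Object localCopyOfCurrentValues(String name) { return new Object(); }
        void propagatePeerObject(String name, Object peer) { }
    }
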
As seen in Fig. 15 a modification to the general arrangement of Fig. 8 is provided in that machines M1, M2...Mn are as before and run the same application program (or programmes) 50 on all machines M1, M2...Mn simultaneously.
However, the previous arrangement is modified by the provision of a server machine X which is conveniently able to supply a housekeeping function, and especially the initialisation of structures, assets and resources. Such a server machine X can be a low value commodity computer such as a PC since its computational load is low. As indicated by broken lines in Fig. 15, two server machines X and X+1 can be provided for redundancy purposes to increase the overall reliability of the system. Where two such server machines X and X+1 are provided, they are preferably operated as dual machines in a cluster. The additional machine X+1 is optional as indicated by the broken lines in Fig. 15. It is not necessary to provide a server machine X as its computational load can be distributed over machines M1, M2...Mn. Alternatively, a database operated by one machine (in a master/slave type operation) can be used for the housekeeping function.
Fig. 16 shows a preferred general procedure to be followed. After a loading step 161 has been commenced, the instructions to be executed are considered in sequence and all initialization routines are detected as indicated in step 162. In the JAVA language these are the <init> and <clinit> routines (or methods in JAVA terminology). Other languages use different terms.
Where an initialization routine is detected in step 162, it is modified in step 163, typically by inserting further instructions into the routine. Alternatively, the modifying instructions could be inserted prior to the routine. Once the modification step 163 has been completed the loading procedure continues, as indicated in step 164.
Fig. 17 illustrates a particular form of modification. After commencing the routine in step 171, the structures, assets or resources (in JAVA termed classes or objects) to be initialised are, in step 172, allocated a name or tag which can be used globally by all machines. This is most conveniently done via a table maintained by server machine X of Fig. 15. This table also includes the status of the class or object to be initialised.
As indicated in Fig. 17, if steps 173 and 174 determine that the global name is not already initialised elsewhere (ie on a machine other than the machine carrying out the loading) then this means that the object or class can be initialised in the normal fashion by carrying out step 176 since it is the first such object or class to be created.
However, if steps 173 and 174 determine that the global name is already utilised elsewhere, this means that another machine has already initialised this class or object. As a consequence, the regular initialisation routine is aborted in its entirety, by carrying out step 175.
Fig. 18 shows the enquiry made by the loading machine (one of M1, M2...Mn) to the server machine X of Fig. 15. The operation of the loading machine is temporarily interrupted as indicated by step 181 until the reply is received from machine X, as indicated by step 182.
Fig. 19 shows the activity carried out by machine X of Fig. 15 in response to such an enquiry as step 181 of Fig. 18. The initialisation status is determined in steps 192 and 193 and, if already initialised, the response to that effect is sent to the enquiring machine by carrying out step 194. Similarly, if the initialisation status is uninitialized, the corresponding reply is sent by carrying out steps 195 and 196. The waiting enquiring machine created by step 182 is then able to respond accordingly.
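Annexures B7 and B8 (referred to below) contain the actual InitClient and InitServer source; the fragment here is only an assumed, much-reduced sketch of the table kept by machine X and of its reply logic: the first enquiry for a given global name is answered as uninitialized and the entry is then marked so that every later enquiry is answered as already initialized.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class InitStatusTableSketch {
        // Global name or tag of each class/object against its initialisation status.
        private final Map<String, Boolean> initialised = new ConcurrentHashMap<>();

        // Returns true if the named asset was already initialised on some machine;
        // the first caller receives false and the table is marked for all later callers.
        boolean queryAndMark(String globalName) {
            return initialised.putIfAbsent(globalName, Boolean.TRUE) != null;
        }
    }
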
Reference is made to the accompanying Annexures in which:
Annexures A1-A10 illustrate actual code in relation to fields,
Annexure B1 is a typical code fragment from an unmodified <clinit> instruction,
Annexure B2 is an equivalent in respect of a modified <clinit> instruction,
Annexure B3 is a typical code fragment from an unmodified <init> instruction,
Annexure B4 is an equivalent in respect of a modified <init> instruction,
In addition, Annexure B5 is an alternative to the code of Annexure B2, and
Annexure B6 is an alternative to the code of Annexure B4.
Furthermore, Annexure B7 is the source-code of InitClient, which queries an "initialization server" for the initialization status of the relevant class or object.
Annexure B8 is the source-code of InitServer, which receives an initialization status query by InitClient and in response returns the corresponding status.
Similarly, Annexure B9 is the source-code of the example application used in the before/after examples of Annexures B1-B6.
Turning now to Fig. 20, the procedure followed to modify the <clinit> routine relating to classes so as to convert from the code fragment of Annexure B1 to the code fragment of Annexure B2 is indicated. The initial loading of the application program onto the JAVA virtual machine 72 is commenced at step 201, and each line of code is scrutinized in order to detect those instructions which represent the <clinit> routine by carrying out step 202. Once so detected, the <clinit> routine is modified as indicated in Annexure B2 by carrying out step 203. As indicated by step 204, after the modification is completed the loading procedure is then continued.
Annexures B1 and B2 are the before and after excerpt of a <clinit> instruction respectively. The modified code that is added to the method is highlighted in bold. In the original code sample of Annexure B1 the <clinit> method creates a new object of itself, and writes this to the memory location (field) called "thisTest". Thus, without management of class loading in a distributed environment, each machine would reinitialise the same shared memory location (field), with different objects. Clearly this is not what the programmer of the application program being loaded expects to happen.
So, taking advantage of the DRT, the application code is modified as it is loaded into the machine by changing the <clinit> method. The changes made (highlighted in bold) are the initial instructions that the <clinit> method executes.
These added instructions check if this class has already been loaded by calling the isAlreadyLoaded() method, which returns either true or false corresponding to the loaded state of this class.
The isAlreadyLoaded() method of the DRT can optionally take an argument which represents a unique identifier for this class (see Annexures B5 and B6), for example the name of the class, or a class object representing this class, or a unique number representing this class across all machines, to be used in the determination of the loaded status of this class. This way, the DRT can support the loading of multiple classes at the same time without becoming confused as to which of the multiple classes are already loaded and which are not, by using the unique identifier of each class to consult the correct record in the isAlreadyLoaded table.
The DRT can determine the loaded state of the class in a number of ways.
Preferably, it can ask each machine in turn if this class is loaded, and if any machine replies true, then return true, otherwise false. Alternatively, the DRT on the local machine can consult a shared record table (perhaps on a separate machine (eg machine X), or a coherent shared record table on the local machine, or a database) to determine if this class has been loaded or not.
If the DRT returns false, then this means that this class has not been loaded before on any machine in the distributed environment, and hence, this execution is to be considered the first and original. As a result, the DRT must update the "isAlreadyLoaded" record for this class in the shared record table to true, such that all subsequent invocations of isAlreadyLoaded on all other machines, and including the current machine, will now return true. Thus, if DRT.isAlreadyLoaded() returns false, the modified <clinit> method proceeds with the original code block, which now trails the inserted three instructions.
On the other hand, if the DRT returns true, then this means that this class has already been loaded in the distributed environment, as recorded in the shared record table of loaded classes. In such a case, the original code block is NOT to be executed, as it will overwrite already-initialised memory locations etc. Thus, when the DRT returns true, the inserted three instructions prevent execution of the original code, and return straight away to the application program.
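Expressed in JAVA source rather than in the bytecode of Annexure B2 (and therefore only an approximation of it, with an assumed DRT helper class), the modified <clinit> behaves as follows: the inserted test consults isAlreadyLoaded() and skips the original static initialization when it returns true.

    class Test {
        static Test thisTest;   // the shared memory location (field) of Annexure B1

        static {
            // Inserted test: only the first machine to load the class runs the
            // original <clinit> body; all later machines skip it so that the
            // already-initialised memory location is not overwritten.
            if (!DistributedRunTime.isAlreadyLoaded("Test")) {
                thisTest = new Test();   // original <clinit> body
            }
        }
    }

    // Assumed stand-in for the DRT's shared record table of loaded classes;
    // the real record is shared across all machines, not local to one JVM.
    class DistributedRunTime {
        private static final java.util.Set<String> loaded =
                java.util.concurrent.ConcurrentHashMap.newKeySet();

        static boolean isAlreadyLoaded(String classIdentifier) {
            // false on the first invocation for a given class, true thereafter
            return !loaded.add(classIdentifier);
        }
    }
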
An equivalent procedure for the <init> routines relating to objects is illustrated in Fig. 21 where steps 212 and 213 are equivalent to steps 202 and 203 of Fig. 20. This results in the code of Annexure B3 being converted into the code of Annexure B4.
A similar modification as used for <clinit> is used for <init>. The application program's <init> block (or blocks, as there can be multiple unlike <clinit>) is or are detected as shown by step 212 and modified as shown by step 213 to behave coherently across the distributed environment.
In the example of Annexure B3 the application program's <init> instructions initialise a memory location (field) with the timestamp of the loading time. The application could use this, for example, to record when this object was created.
Clearly, in a distributed environment, where peer objects can load at different times, special treatment is necessary to make sure that the timestamp of the first-loaded peer object is not overwritten by later peer objects.
The disassembled instruction sequence after modification has taken place is set out in Annexure B4 and the modified/inserted instructions are highlighted in bold.
For the <init> modification, unlike the <clinit> modification, the modifying instructions are often required to be placed after the "invokespecial" instruction, instead of at the very beginning. The reasons for this are driven by the JAVA Virtual Machine specification. Other languages often have similar subtle design nuances.
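Again expressed in JAVA source rather than in the bytecode of Annexure B4 (so only an illustrative approximation, with assumed helper names), the test inserted into <init> sits after the call to the superclass constructor, which is the source-level counterpart of placing the inserted instructions after "invokespecial":

    class TimestampedObject {
        long timestamp;   // the memory location (field) initialised by <init>

        TimestampedObject() {
            super();   // compiled to the invokespecial instruction
            // Inserted test: a later-loaded peer object must not overwrite the
            // timestamp recorded by the first-loaded peer object.
            if (PeerRegistry.peerAlreadyLoaded(this)) {
                return;
            }
            timestamp = System.currentTimeMillis();   // original <init> body
        }
    }

    // Assumed helper; a real DRT would consult the shared record table.
    class PeerRegistry {
        static boolean peerAlreadyLoaded(Object candidatePeer) {
            return false;   // placeholder for the distributed check
        }
    }
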
Given the fundamental concept of testing to see if initialization has already been carried out, and if not, carrying it out, and if so, not carrying out any further initialization; there are several different ways in which this concept can be carried out.
In the first embodiment, a particular machine, say machine M2, loads the class or object on itself and then loads each of the other machines M1, M3...Mn (either sequentially or simultaneously). In this arrangement, which may be termed "master/slave", each of machines M1, M3...Mn loads what it is given by machine M2.
In a variation of this "master/slave" arrangement, machine M2 loads the <clinit> routine in unmodified form on machine M2 and then modifies the class by deleting the initialization routine in its entirety and loads the modified class on the other machines. Thus in this instance the modification is not a by-passing of the initialization routine but a deletion of it on all machines except one.
In a still further embodiment, each machine receives the initialization routine, but modifies it and loads the modified routine on that machine. This enables the modification carried out by each machine to be slightly different being optimized based upon its architecture and operating system, yet still coherent with all other similar modifications.
In a further arrangement, a particular machine, say M1, loads the class and all other machines M2, M3...Mn do a modification to delete the initialization routine and load the modified version.
In all instances, the supply can be branched (ie M2 supplies each of M1, M3, M4, etc directly) or cascaded or sequential (ie M2 supplies M1 which then supplies M3 which then supplies M4, and so on).
In a still further arrangement, the initial machine, say M2, can carry out the initial loading and then generate a table which lists all the classes loaded by machine M2. This table is then sent to all other machines (either in branched or cascade fashion). Then if a machine, other than M2, needs to access a class listed in the table, it sends a request to M2 to provide the necessary information. Thus the information provided to machine Mn is, in general, different from the initial state loaded into machine M2.
Under the above circumstances it is necessary for each entry in the table to be accompanied by a counter which is incremented on each occasion that a class is loaded. Thus, when data is demanded, both the class contents and the count of the corresponding counter are transferred in response to the demand. This "on demand" mode increases the overhead of each computer but reduces the volume of traffic on the communications network which interconnects the computers.
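Purely as an illustrative sketch (the record layout and names are assumptions, not taken from the specification), the table kept by the initial machine in this "on demand" mode might pair each loaded class with the value of the counter at the time of loading, and return both when another machine demands that class:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class LoadedClassTableSketch {
        static final class Entry {
            final byte[] classContents;
            final long loadCount;
            Entry(byte[] classContents, long loadCount) {
                this.classContents = classContents;
                this.loadCount = loadCount;
            }
        }

        private final Map<String, Entry> table = new ConcurrentHashMap<>();
        private long counter;

        // Counter incremented on each occasion that a class is loaded.
        synchronized void recordLoad(String className, byte[] classContents) {
            counter++;
            table.put(className, new Entry(classContents, counter));
        }

        // Both the class contents and the count are transferred in response to a demand.
        synchronized Entry demand(String className) {
            return table.get(className);
        }
    }
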
In a still further arrangement, the machines M1 to Mn can send all load requests to an additional machine X (of Fig. 15), which performs the modification via any of the aforementioned methods, and returns the modified class to each of the machines M1 to Mn which then load the class locally. In this arrangement, machines M1 to Mn do not maintain a table of records for any class, and instead, they forward all load requests to machine X, which maintains the table of loaded classes, and returns a modified class to each machine dependent on whether or not it is the first time a given class is loaded on machines M1 to Mn. The modifications performed by machine X can include any of the modifications covered under the scope of the present invention.
Persons skilled in the computing arts will be aware of four techniques used in creating modifications in computer code. The first is to make the modification in the original (source) language. The second is to convert the original code (in say JAVA) into an intermediate representation (or intermediate language). Once this conversion takes place the modification is made and then the conversion is reversed. This gives the desired result of modified JAVA code.
The third possibility is to convert to machine code (either directly or via the abovementioned intermediate language). Then the machine code is modified before being loaded and executed. The fourth possibility is to convert the original code to an intermediate representation, which is then modified and subsequently converted into machine code.
The present invention encompasses all four modification routes and also a combination of two, three or even all four, of such routes.
Turning now to Figs. 22-24, two laptop computers 101 and 102 are illustrated.
The computers 101 and 102 are not necessarily identical and indeed, one can be an IBM or IBM-clone and the other can be an APPLE computer. The computers 101 and 102 have two screens 105, 115, two keyboards 106, 116, but a single mouse 107. The two machines 101, 102 are interconnected by means of a single coaxial cable or twisted pair cable 314.
Two simple application programs are downloaded onto each of the machines 101, 102, the programs being modified as they are being loaded as described above.
In this embodiment the first application is a simple calculator program and results in the image of a calculator 108 being displayed on the screen 105. The second program is a graphics program which displays four coloured blocks 109 which are of different colours and which move about at random within a rectangular box 310. Again, after loading, the box 310 is displayed on the screen 105. Each application operates independently so that the blocks 109 are in random motion on the screen 105 whilst numerals within the calculator 108 can be selected (with the mouse 107) together with a mathematical operator (such as addition or multiplication) so that the calculator 108 displays the result.
The mouse 107 can be used to "grab" the box 310 and move same to the right across the screen 105 and onto the screen 115 so as to arrive at the situation illustrated in Fig. 23. In this arrangement, the calculator application is being conducted on machine 101 whilst the graphics application resulting in display of box 310 is being conducted on machine 102.
However, as illustrated in Fig. 24, it is possible by means of the mouse 107 to drag the calculator 108 to the right as seen in Fig. 23 so as to have a part of the calculator 108 displayed by each of the screens 105, 115. Similarly, the box 310 can be dragged by means of the mouse 107 to the left as seen in Fig. 23 so that the box 310 is partially displayed by each of the screens 105, 115 as indicated in Fig. 24. In this configuration, part of the calculator operation is being performed on machine 101 and part on machine 102 whilst part of the graphics application is being carried out on the machine 101 and the remainder is carried out on machine 102.
The foregoing describes only some embodiments of the present invention and modifications, obvious to those skilled in the art, can be made thereto without departing from the scope of the present invention. For example, reference to JAVA includes both the JAVA language and also JAVA platform and architecture.
Those skilled in the programming arts will be aware that when additional code or instructions is/are inserted into an existing code or instruction set to modify same, the existing code or instruction set may well require further modification (eg by re-numbering of sequential instructions) so that offsets, branching, attributes, mark up and the like are catered for.
Similarly, in the JAVA language memory locations include, for example, both fields and array types. The above description deals with fields and the changes required for array types are essentially the same mutatis mutandis. Also the present invention is equally applicable to similar programming languages (including procedural, declarative and object orientated) to JAVA including Microsoft.NET platform and architecture (Visual Basic, Visual C/C++, and C#), FORTRAN, C/C++, COBOL, BASIC etc.
The abovementioned embodiment in which the code of the JAVA initialisation routine is modified, is based upon the assumption that either the run time system (say, JAVA HOTSPOT VIRTUAL MACHINE written in C and JAVA) or the operating system (LINUX written in C and Assembler, for example) of each machine M1...Mn will call the JAVA initialisation routine. It is possible to leave the JAVA initialisation routine unamended and instead amend the LINUX or HOTSPOT routine which calls the JAVA initialisation routine, so that if the object or class is already loaded, then the JAVA initialisation routine is not called. In order to embrace such an arrangement the term "initialisation routine" is to be understood to include within its scope both the JAVA initialisation routine and the "combination" of the JAVA initialisation routine and the LINUX or HOTSPOT code fragments which call or initiate the JAVA initialisation routine.
The terms object and class used herein are derived from the JAVA environment and are intended to embrace similar terms derived from different environments such as dynamically linked libraries (DLL), or object code packages, or function unit or memory locations.
The term "comprising" (and its grammatical variations) as used herein is used in the inclusive sense of "having" or "including" and not in the exclusive sense of "consisting only of'.
Copyright Notice

This patent specification contains material which is subject to copyright protection. The copyright owner (which is the applicant) has no objection to the reproduction of this patent specification or related materials from publicly available associated Patent Office files for the purposes of review, but otherwise reserves all copyright whatsoever. In particular, the various instructions are not to be entered into a computer without the specific written approval of the copyright owner.

Claims (24)

  2. The system as claimed in claim 1 wherein each said computer includes a distributed run time means with the distributed run time means of each said computer able to communicate with all other computers whereby if a portion of said application program(s) running on one of said computers creates an object in that computer then the created object is propagated by the distributed run time means of said one computer to all the other computers.
  3. The system as claimed in claim 2 wherein each said application program is modified before, during, or after loading by inserting an initialization routine to modify each instance at which said application program creates an object, said initialization routine propagating every object newly created by one computer to all said other computers.
  4. The system as claimed in claim 3 wherein said inserted initialization routine modifies a pre-existing initialization routine to enable the pre-existing initialization routine to execute on creation of the first of said like plurality of objects, and to disable the pre-existing initialization routine on creation of all subsequent ones of said like plurality of objects.
  5. The system as claimed in claim 4 wherein the application program is modified in accordance with a procedure selected from the group of procedures consisting of re-compilation at loading, pre-compilation prior to loading, compilation prior to loading, just-in-time compilation, and re-compilation after loading and before execution of the relevant portion of application program.
  6. The system as claimed in claim 2 wherein said modified application program is transferred to all said computers in accordance with a procedure selected from the group consisting of master/slave transfer, branched transfer and cascaded transfer.
  7. A plurality of computers interconnected via a communications link and simultaneously operating at least one application program each written to operate on only a single computer wherein each said computer substantially simultaneously executes a different portion of said application program(s), each said computer in operating its application program portion creates objects only in local memory physically located in each said computer, the contents of the local memory utilized by each said computer are fundamentally similar but not, at each instant, identical, and every one of said computers has distribution update means to distribute to all other said computers objects created by said one computer.
  8. The plurality of computers as claimed in claim 7 wherein the local memory capacity allocated to the or each said application program is substantially identical and the total memory capacity available to the or each said application program is said allocated memory capacity.
  9. The plurality of computers as claimed in claim 7 wherein all said distribution update means communicate via said communications link at a data transfer rate which is substantially less than the local memory read rate.
  10. The plurality of computers as claimed in claim 7 wherein at least some of said computers are manufactured by different manufacturers and/or have different operating systems.
  11. A method of running simultaneously on a plurality of computers at least one application program each written to operate on only a single computer, said computers being interconnected by means of a communications network, said method comprising the steps of: (i) executing different portions of said application program(s) on different ones of said computers and for each said portion creating a like plurality of substantially identical objects each in the corresponding computer and each having a substantially identical name, and (ii) creating the initial contents of each of said identically named objects substantially the same.
  12. The method as claimed in claim 11 comprising the further step of: (iii) if a portion of said application program running on one of said computers creates an object in that computer, then the created object is propagated to all of the other computers via said communications network.
  13. The method as claimed in claim 12 including the further step of: (iv) modifying said application program before, during or after loading by inserting an initialization routine to modify each instance at which said application program creates an object, said initialization routine propagating every object created by one computer to all said other computers.
  14. The method as claimed in claim 13 including the further step of: (v) modifying said application program utilizing a procedure selected from the group of procedures consisting of re-compilation at loading, pre-compilation prior to loading, compilation prior to loading, just-in-time compilation, and re-compilation after loading and before execution of the relevant portion of application program.
  15. The method as claimed in claim 13 including the further step of: (vi) transferring the modified application program to all said computers utilizing a procedure selected from the group consisting of master/slave transfer, branched transfer and cascaded transfer.
  16. A method of compiling or modifying an application program written to operate on only a single computer to have different portions thereof execute substantially simultaneously on different ones of a plurality of computers interconnected via a communications link, said method comprising the steps of: (i) detecting instructions which create objects utilizing one of said computers, (ii) activating an initialization routine following each said detected object creation instruction, said initialization routine forwarding each created object to the remainder of said computers.
  17. The method as claimed in claim 16 and carried out prior to loading the application program onto each said computer, or during loading of the application program onto each said computer, or after loading of the application program onto each said computer and before execution of the relevant portion of the application program.
  18. In a multiple thread processing computer operation in which individual threads of a single application program written to operate on only a single computer are simultaneously being processed each on a different corresponding one of a plurality of computers interconnected via a communications link, the improvement comprising communicating objects created in local memory physically associated with the computer processing each thread to the local memory of each other said computer via said communications link.
  19. The improvement as claimed in claim 18 wherein objects created in the memory associated with one said thread are communicated by the computer of said one thread to all other said computers.
  20. The improvement as claimed in claim 18 wherein objects created in the memory associated with one said thread are transmitted to the computer associated with another said thread and are transmitted thereby to all said other computers.
  21. A method of ensuring consistent initialization of an application program written to operate on only a single computer but different portions of which are to be executed simultaneously each on a different one of a plurality of computers interconnected via a communications network, said method comprising the steps of: (i) scrutinizing said application program at, or prior to, or after loading to detect each program step defining an initialization routine, and (ii) modifying said initialization routine to ensure consistent operation of all said computers.
  22. The method as claimed in claim 21 wherein said initialization routine is modified to execute once only on the creation of a first object by any one of said computers and is modified to be disabled on the creation of each subsequent peer copy of said object by the remainder of said computers.
  23. The method claimed in claim 21 or 22 wherein step (ii) comprises the steps of: (iii) loading and executing said initialization routine on one of said computers, (iv) modifying said initialization routine by said one computer, and (v) transferring said modified initialization routine to each of the remaining computers.
  24. The method as claimed in claim 23 wherein said modified initialization routine is supplied by said one computer direct to each of said remaining computers.
  25. The method as claimed in claim 23 wherein said modified initialization routine is supplied in cascade fashion from said one computer sequentially to each of said remaining computers.
  26. The method claimed in claim 21 or 22 wherein step (ii) comprises the steps of: (vi) loading and modifying said initialization routine on one of said computers, (vii) said one computer sending said unmodified initialization routine to each of the remaining computers, and (viii) each of said remaining computers modifying said initialization routine after receipt of same.
  27. The method claimed in claim 26 wherein said unmodified initialization routine is supplied by said one computer directly to each of said remaining computers.
  28. The method claimed in claim 26 wherein said unmodified initialization routine is supplied in cascade fashion from said one computer sequentially to each of said remaining computers.
  29. A computer program product comprising a set of program instructions stored in a storage medium and operable to permit a plurality of computers to carry out the method as claimed in claim 11 or 16 or 18 or 21.
  30. A plurality of computers interconnected via a communication network and operable to ensure consistent initialization of an application program written to operate on only a single computer but running simultaneously on said computers, said computers being programmed to carry out the method as claimed in claim 11 or 16 or 18 or 21 or being loaded with the computer program product as claimed in claim 29.
  31. A multiple computer system substantially as herein described with reference to Fig. 5 or Fig. 8 or Fig. 15 or Figs. 22-24 and with reference to any one of Figs. 16-21 of the drawings.
  32. A plurality of computers substantially as herein described with reference to Fig. 5 or Fig. 8 or Fig. 15 or Figs. 22-24 and with reference to any one of Figs. 16-21 of the drawings.
  33. A method of running simultaneously on a plurality of computers at least one application program each written to operate on only a single computer, said method being substantially as herein described with reference to any one of Figs. 16-21 of the drawings.
  34. A method of compiling or modifying an application program written to operate on only a single computer, said method being substantially as herein described with reference to any one of Figs. 16-21 of the drawings.
  35. In a multiple thread processing computer operation in which individual threads of a single application program written to operate on only a single computer are simultaneously being processed each on a different corresponding one of a plurality of computers interconnected via a communications link, the improvement comprising communicating objects substantially as herein described with reference to any one of Figs. 16-21 of the drawings.
  36. A method of ensuring consistent initialization of an application program written to operate on only a single computer but different portions of which are to be executed simultaneously each on a different one of a plurality of computers interconnected via a communications network, said method being substantially as herein described with reference to any one of Figs. 16-21 of the drawings.
Dated this 8th day of August 2006
WARATEK PTY LTD
By FRASER OLD & SOHN
Patent Attorneys for the Applicant
AU2005236085A 2004-04-22 2005-04-22 Modified computer architecture with initialization of objects Ceased AU2005236085B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2005236085A AU2005236085B2 (en) 2004-04-22 2005-04-22 Modified computer architecture with initialization of objects

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2004902146A AU2004902146A0 (en) 2004-04-22 Modified Computer Architecture
AU2004902146 2004-04-22
PCT/AU2005/000578 WO2005103924A1 (en) 2004-04-22 2005-04-22 Modified computer architecture with initialization of objects
AU2005236085A AU2005236085B2 (en) 2004-04-22 2005-04-22 Modified computer architecture with initialization of objects

Publications (2)

Publication Number Publication Date
AU2005236085A1 AU2005236085A1 (en) 2005-11-03
AU2005236085B2 true AU2005236085B2 (en) 2007-02-15

Family

ID=36950987

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005236085A Ceased AU2005236085B2 (en) 2004-04-22 2005-04-22 Modified computer architecture with initialization of objects

Country Status (1)

Country Link
AU (1) AU2005236085B2 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5488723A (en) * 1992-05-25 1996-01-30 Cegelec Software system having replicated objects and using dynamic messaging, in particular for a monitoring/control installation of redundant architecture
US6625751B1 (en) * 1999-08-11 2003-09-23 Sun Microsystems, Inc. Software fault tolerant computer system
US6560717B1 (en) * 1999-12-10 2003-05-06 Art Technology Group, Inc. Method and system for load balancing and management
WO2002044835A2 (en) * 2000-11-28 2002-06-06 Gingerich Gregory L A method and system for software and hardware multiplicity
WO2003083614A2 (en) * 2002-03-25 2003-10-09 Eternal Systems, Inc. Transparent consistent active replication of multithreaded application programs
US20040073828A1 (en) * 2002-08-30 2004-04-15 Vladimir Bronstein Transparent variable state mirroring

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
H. E. Bal et al, A Distributed Implementation of the Shared Data-Object Model, Proc. USENIX Workshop on Experiences with Distributed and Multiprocessor Systems, pp. 1-19, October 1989 *
H. E. Bal et al, Object Distribution in Orca Using Compile-Time and Run-Time Techniques, Proc. Conference on Object-Oriented Programming Systems, Languages and Applications, pp. 162-177, September 1993 *
H. E. Bal et al, Orca: A Language for Parallel Programming of Distributed Systems, IEEE Transactions on Software Engineering, 18(3), pp. 190-205, March 1992 *
H. E. Bal et al, Replication Techniques for Speeding up Parallel Applications on Distributed Systems, Concurrency Practice & Experience, 4(5), pp. 337-55, August 1992 *
H.E. Bal et al, Experience with Distributed Programming in Orca, Proc. IEEE CS International Conference on Computer Languages, pp. 79-89, March 1990 *
T.C. Bressoud, TFT: A Software System for Application-Transparent Fault Tolerance, Proc. 28th Annual International Symposium on Fault-Tolerant Computing, pp. 128-37, 1998 *

Also Published As

Publication number Publication date
AU2005236085A1 (en) 2005-11-03

Similar Documents

Publication Publication Date Title
US7707179B2 (en) Multiple computer architecture with synchronization
US20050257219A1 (en) Multiple computer architecture with replicated memory fields
US20050262513A1 (en) Modified computer architecture with initialization of objects
EP1763774B1 (en) Multiple computer architecture with replicated memory fields
US7844665B2 (en) Modified computer architecture having coordinated deletion of corresponding replicated memory locations among plural computers
US20060095483A1 (en) Modified computer architecture with finalization of objects
US8028299B2 (en) Computer architecture and method of operation for multi-computer distributed processing with finalization of objects
US20050240737A1 (en) Modified computer architecture
US20060150195A1 (en) System and method for interprocess communication
US20080072238A1 (en) Object synchronization in shared object space
KR100301274B1 (en) Object-oriented method maintenance mechanism that does not require cessation of the computer system or its programs
US7542981B2 (en) Methods and apparatus for parallel execution of a process
EP2652634A1 (en) Distributed computing architecture
AU2005236085B2 (en) Modified computer architecture with initialization of objects
AU2005236089B2 (en) Multiple computer architecture with replicated memory fields
AU2005236086B2 (en) Multiple computer architecture with synchronization
AU2005236087B2 (en) Modified computer architecture with coordinated objects
AU2005236088A1 (en) Modified computer architecture with finalization of objects

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired