WO1999030230A1 - Naturally parallel computing system and method - Google Patents
Naturally parallel computing system and method
- Publication number
- WO1999030230A1 (PCT/US1998/026436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- execution
- nodes
- execution graph
- queue
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/31—Programming languages or programming paradigms
- G06F8/311—Functional or applicative languages; Rewrite languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4494—Execution paradigms, e.g. implementations of programming paradigms data driven
Definitions
- the present invention relates to a method and system for executing naturally parallel programs on one or more processors.
- Parallel computation can be defined as the simultaneous execution of multiple program instructions. Simultaneous execution is accomplished through the use of one or multiple processors. Since single processor computers can only execute one program's instruction at a time, to simulate simultaneous execution the operating system of the computer must take turns executing instructions from each active program. This process is called multi-tasking. When a single program is broken into multiple components that can be simultaneously executed, the process is called multi-threading, multi-tasking, or preemption. A program with multiple threads requires a programmer to use special utilities that direct the operating system how and when to execute these components.
- Massively parallel computing involves computers with many processors.
- the programs developed for these computers are generally customized for the specific computer and tailored to its communication protocols. Programs written for these massively parallel computers do not port well to computers with one or a few processors.
- the system and method of this invention achieves parallel computation over many programs without the use of multi-tasking or multi-threading.
- programs written to run on the system disclosed by this invention are highly portable.
- This invention is purely an execution method for parallel computation and is independent of system data and how it is used.
- program functionality conventionally depends on the existence of many operation types for operating on data.
- This computing system allows programs to be developed where functionality is dependent on the shape of the program as well as on the operations used.
- This invention is a system for executing naturally parallel programs on at least one processor.
- Naturally parallel programs are programs that may be divided into nodes, each node having a finite set of instructions and data.
- the system comprises a loading means for processing source code and network messages to create an execution graph that resides in a memory means.
- the execution graph comprises at least one node, each node having an associated address.
- a loading means processes each node to create an execution graph.
- the execution graph is an arbitrary network and may be a hierarchical representation of the ordering in which the nodes are to be executed. All nodes at the same level in a vertical grouping are considered parallel events and may be executed in any order.
- a queuing means takes nodes that are ready for execution from the execution graph and places the addresses of these nodes on a queue.
- One or more execution means take addresses off the queue and execute the nodes stored at those addresses.
- FIG. 1 is a block diagram illustrating the components of the invention.
- FIG. 2 is a diagram illustrating the execution graph.
- FIG. 3 is a block diagram illustrating the executable queue.
- FIG. 4 is a block diagram illustrating the loading means.
- FIG. 5 is a block diagram illustrating the communication to external applications.
- FIG. 6 is a block diagram illustrating same queue execution.
- FIG. 7 is a block diagram illustrating round robin execution.
- FIG. 8 is a block diagram illustrating fixed cell execution.
- this invention is a system for executing naturally parallel programs on at least one processor.
- Naturally parallel programs are programs that may be divided into nodes, each node being a finite set of instructions and data.
- the system 100 comprises a loading means 130 that processes source code files 120 and network messages 125 into nodes, as for example N1-N5.
- the nodes are placed in an execution graph 140.
- the execution graph resides in memory means 105 that contains an execution space 135.
- the execution space 135 may be virtual memory and may contain utilities 141 to add, delete or modify nodes, an execution graph 140, and a list of free nodes 142.
- the execution graph 140 contains at least one node, each node having a finite set of instructions and data and wherein each node has an associated address 165.
- the execution graph 140 is any arbitrary arrangement of linked nodes that creates a network. Note that the arbitrary arrangement of linked nodes in an execution graph 140 only needs to be meaningful.
- the arbitrary network may include, inter alia, a hierarchical representation of the ordering in which the nodes are to be executed, a neural net, cells in a numerical method of finite element analysis, or any arbitrary structure that achieves the programmer's goals.
- a queuing means 150 queues at least one address associated with one of the at least one node responsive to the execution graph 140 thereby creating a queue 160 of addresses 165 to nodes which may be executed in parallel.
- One or more execution means 170 reads addresses from the queue 160, de-references the address 165 and executes the node in the memory means 105 stored at that address.
- a network is a collection of nodes or objects that are linked together. Nodes and objects are used interchangeably in this document.
- a node is a finite set of instructions and data. Nodes may be executable entities, information, or any combination of the two. Nodes may be Windows, buttons, sliders, mathematical expressions, etc.
- When a node executes it performs an action, such as adding two numbers together or flashing a message.
- When a node finishes executing, it causes all nodes linked to it to be executed. Neither data nor messages need to be passed to the linked nodes for them to execute, and unlike conventional object oriented programming, explicit method calls are not necessary to execute an object.
- execution graph 140 provides a high-level execution framework.
- the execution graph 140 is a network of nodes representing an arbitrary network. Note that a set of nodes need not be executed in any particular order.
- the execution graph provides the medium of node execution and the medium of node organization. Nodes are further defined under the queuing means, below, but as stated earlier, a node is a finite set of instructions and data.
- the execution graph 140 can be linear, hierarchical, or any conceivable shape that achieves the necessary computational results.
- the use of an execution graph makes a program's functionality dependent on the type of nodes being executed as well as the shape of the graph executing the nodes.
- an arbitrary network may be a hierarchical representation of the ordering in which the nodes are to be executed.
- the execution graph 140 may also represent a neural net, or simulate a set of complex interacting components such as mechanical gears.
- the nodes in an execution graph 140 may also represent cells in a numerical method of finite element analysis. In other words, the execution graph may be any structure that achieves the programmer's goals.
- executable nodes are linked together to form an execution graph 200 which is stored in memory means 105.
- the queuing means 150 places the addresses of nodes ready for execution on the queue 160.
- the address of node 201 is first placed on the execution queue 160.
- the event that queues node 201 could be the press of a button, the arrival of a message, the opening of a window, etc.
- nodes 203, 205, and 207 are queued for execution.
- node 209 and nodes 211 and 213 execute.
- Node 216 is called a tie node because it will execute only after the execution of nodes 209 and 215.
- When tie node 216 executes, node 217 is executed and the program represented by execution graph 200 is complete.
- a method for controlling the dependent execution of a node is to use a special object type called a 'tie'.
- the tie is linked to the nodes that must complete execution. Each linked node is represented by a bit within the tie node. After a linked node executes, it queues a message indicating its predefined bit number and the address of the tie. This in turn causes the tie node to be queued for execution. When the tie executes, it sets the bit corresponding to the node that triggered it and then tests whether all bits have been set. If so, the dependent node is queued for execution and the bits are cleared.
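- The following C sketch illustrates the tie mechanism described above; it is not the patent's implementation, and the structure and function names (tie_node, tie_signal, queue_node) are assumptions.

    #include <stdint.h>

    /* Sketch only: one bit per predecessor node; when every required bit has
     * been set, the dependent node is queued and the bits are cleared. */
    typedef struct tie_node {
        uint32_t bits_set;        /* bits recorded so far                   */
        uint32_t bits_required;   /* one bit per predecessor (all must set) */
        void    *dependent_node;  /* node to queue once all bits are set    */
    } tie_node;

    extern void queue_node(void *node_address);   /* queuing means (assumed) */

    /* Conceptually invoked via the queued message carrying the predecessor's
     * predefined bit number and the address of the tie. */
    void tie_signal(tie_node *tie, unsigned bit_number)
    {
        tie->bits_set |= (uint32_t)1 << bit_number;
        if (tie->bits_set == tie->bits_required) {   /* all predecessors done */
            queue_node(tie->dependent_node);
            tie->bits_set = 0;                       /* clear bits for reuse  */
        }
    }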
- Nodes in any portion of the graph can be triggered for execution. There are no “do-while”, “for”, “go-to” or “switch” statements. As explained in the section on the queuing means, the flow of execution depends on the connectivity between nodes and the properties of the node being executed.
- a node's executable logic consists of a finite sequence of binary instructions.
- the execution means 170 calls the address of the first instruction in the sequence.
- the sequence executes up to the last instruction, which must be a return instruction.
- the sequence of binary instructions may contain jump instructions that advance the execution means' 170 instruction pointer to an instruction within the sequence, but there can never be a backward jump in the sequence. The inclusion of a backward jump could cause execution of the sequence to be endless, and control would never be returned to the execution means' main program, which is what allows the execution means to fetch and execute another node.
- the assembler used by the execution means 170 to produce a node's executable logic does not allow backward jumps to occur.
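- A minimal C sketch of such a check, assuming a simplified toy instruction representation rather than the patent's actual instruction encoding:

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy instruction representation, for illustration only. */
    typedef struct {
        bool   is_jump;   /* true if the instruction is a jump     */
        size_t target;    /* index of the instruction it jumps to  */
    } toy_instr;

    /* Accept a sequence only if every jump moves forward, so control always
     * falls through to the final return and can never loop endlessly. */
    bool jumps_are_forward_only(const toy_instr *seq, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            if (seq[i].is_jump && seq[i].target <= i)
                return false;    /* backward (or self) jump: reject the node */
        return true;
    }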
- the system's programming language is used to describe the properties of an object.
- the language uses keywords to specify object types and datums that are to receive values.
- the first statement defines an object of type 'form' (equivalent to a Window) and the second statement places an object of type 'button' on the form: Form: Example rgn:10 10 500 400 brushcolor1: 0 0 255 brushcolor2: 0 255 0 style:sys;
- the operator "»" tells the loading means 130 to link the button to the form (the button becomes a child of the form).
- the first word is the name of a predefined class type in the system (class types are also nodes), and the keywords following it are the names of datums or member methods that process the arguments following the keywords.
- the keywords can be defined in any order.
- Because nodes have a well-defined location in the execution graph, they can be easily modified, replaced or deleted while the application they are a part of continues to execute. This capability is known as surgical modification.
- Surgical modification is important for doing remote computing.
- a user may step into the execution graph residing on a different computer (provided they have access) test logic using an interactive debugger and make corrections as necessary. Once corrections have been made, these fixes can be propagated to all computers in a network without inconveniencing the user.
- a network message 125 is sent to the loading means 130 containing the desired graph.
- the message includes a query identifying the node to be affected, the operation to occur (add, delete or modify) and a node or a piece of an execution graph. If the node being modified is pending execution, the system will wait for it to complete before making the modification.
- Surgical modification is a feature allowing a user to fix problems with an application without having to ship whole executables or dynamic link libraries. To ensure security, any time a surgical modification is performed, the system registers the identity of the system that caused the change to occur.
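- An illustrative C sketch of what such a surgical-modification message might contain; the field names and sizes are assumptions, not the patent's actual message layout:

    #include <stdint.h>

    /* Assumed field names and sizes; the payload (a node or a piece of an
     * execution graph) would follow this fixed header. */
    typedef struct {
        char     query[128];       /* query identifying the node to be affected */
        enum { OP_ADD, OP_DELETE, OP_MODIFY } operation;
        uint32_t payload_length;   /* length of the node / graph fragment       */
        char     sender_id[64];    /* identity registered for security auditing */
    } surgical_message;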
4. Queuing Means
- the queuing means 150 queues addresses 165 associated with the nodes on the execution graph 140 that are ready for execution. As shown in FIG. 3, in the preferred embodiment the queue 160 may be a FIFO (First In First Out) queue and a link object.
- the FIFO queue may be an array of 65536 addresses.
- the address of the node is placed into the bottom of the queue and an "end" index pointer 368 is incremented.
- the execution means 170 reads an address of a node from the queue 160.
- the "start" index-pointer 365 is then incremented (incrementing an index pointer past Oxffff returns the value to zero).
- the address of the node is de-referenced and a call is made to transfer execution to the first executable instruction in the node 370.
- the object's logic is executed at 375. If the node is not executable it should not have been queued, but if it had been, the first instruction should be a <return>.
- the last portion of the logic queues the address of the node's link at 380. This proceeds until there are no more addresses in the queue, at which point the system stops execution.
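- A minimal C sketch of the queue described above, assuming 16-bit index pointers so that incrementing past 0xffff wraps to zero; the names queue_node and dequeue_node are illustrative:

    #include <stdint.h>

    static void    *exec_queue[65536];   /* FIFO array of node addresses */
    static uint16_t q_start;             /* "start" index pointer 365    */
    static uint16_t q_end;               /* "end" index pointer 368      */

    /* Queuing means: place the node's address at the bottom of the queue.
     * The 16-bit index wraps from 0xffff back to zero on overflow. */
    void queue_node(void *node_address)
    {
        exec_queue[q_end++] = node_address;
    }

    /* Execution means: take the next address off the queue, or 0 if empty. */
    void *dequeue_node(void)
    {
        if (q_start == q_end)
            return 0;                    /* nothing left to execute */
        return exec_queue[q_start++];
    }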
- a special node is queued when the system starts up. This special node handles incoming events, and the system determines which objects they apply to (buttons, sliders, fields, etc.).
- a link is an executable node. When a link executes it can either place the addresses of its nodes on the queue or execute the nodes directly. The latter method reduces overhead.
- Because nodes are persistent within the system, they are immediately available for execution at all times. Objects can be executed as quickly as they can be swapped into memory. The overhead associated with creating a task, allocating a process id, allocating system resources, opening an executable file, mapping it into virtual memory and performing dynamic run time linking is gone.
- the following code is an example of code that may be used to read an executable graph from a file, put it in memory means 105, and to place the nodes ready for execution on the queue.
- the numbers in brackets "[]" to the left of the code are reference numerals that are referred to in the explanation of the code, below.
- Memory is allocated at step [1010].
- the executable graph is loaded into memory at step [1015].
- the addresses of the nodes ready for execution are queued at step [1017].
- the address for the beginning of memory is moved to the register ebp at step [1019].
- Step [1020] starts the beginning of a loop named read_queue.
- the index for the start of the queue is put into register ebx at step [1021].
- the address of the next node on the queue is retrieved at step [1023].
- the address of the node to be executed is computed by adding the address of the node to the address of the beginning of memory at step [1025]. Note all addresses of the nodes may be dereferenced by adding the address of the node on the queue to the beginning of memory.
- the index to the queue is incremented at step [1027].
- the node is executed at step [1029].
- the read_queue loop is repeated at step [1031].
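- The original listing is in the system's generic assembly; the following C sketch mirrors the steps [1010]-[1031] described above, under the assumption that queued addresses are offsets added to the beginning of the allocated memory (the role played by register ebp). The graph file name, memory size, and helper functions are hypothetical.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef void (*node_logic)(void);

    #define GRAPH_MEMORY (64u * 1024 * 1024)       /* size is an assumption      */

    static uint32_t node_queue[65536];             /* queued node offsets        */
    static uint16_t q_start, q_end;

    extern void load_graph(FILE *f, unsigned char *base);    /* step [1015]      */
    extern void queue_ready_nodes(unsigned char *base);      /* step [1017]      */

    int main(void)
    {
        unsigned char *base = malloc(GRAPH_MEMORY);  /* [1010] allocate memory    */
        FILE *graph_file = fopen("graph.bin", "rb"); /* file name is hypothetical */
        load_graph(graph_file, base);                /* [1015] load graph         */
        queue_ready_nodes(base);                     /* [1017] queue ready nodes  */

        /* [1019] the base address plays the role the assembly gives to ebp. */
        while (q_start != q_end) {                   /* [1020] read_queue loop    */
            uint32_t offset = node_queue[q_start];   /* [1021][1023] next offset  */
            node_logic node = (node_logic)(uintptr_t)(base + offset); /* [1025]   */
            q_start++;                               /* [1027] bump queue index   */
            node();                                  /* [1029] execute the node   */
        }                                            /* [1031] repeat read_queue  */
        return 0;
    }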
- Nodes are objects instantiated from classes that are themselves objects.
- Objects can be functions, external applications, controls, forms, mathematical expressions, etc.
- Data can flow into a node in the form of buffered messages or arguments passed to a function.
- Data can also be referenced as operands in mathematical expressions. The way data is handled by a node or propagates through the system is independent of how the logic executes.
- There are two types of nodes: executable nodes that contain instructions and link nodes that contain links to executable nodes.
- executable nodes may have the following structure: a header defining the node's length and other properties, followed by the node's executable logic whose last instruction queues the address of the node's link (queue[qend] = address_of_link;).
- If a node is linked to another it must have an associated link.
- the structure of a link is as follows:
- the code sequence described below gives the link instructions.
- the address of the link node is stored in register "ebx”.
- the offset to the list of addresses (LINK_ADDR) is added to the register at step [2010].
- An address from the list is put into register "ecx” and if zero then the end of the list has been reached and control returns to the main program, step [2016]. If it is not zero, the node is executed at step [2020].
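- Since the link structure itself is not reproduced above, the following C sketch shows one plausible layout, assuming a zero-terminated list of target node addresses; the field and function names are illustrative, not taken from the patent:

    #include <stddef.h>
    #include <stdint.h>

    typedef void (*node_logic)(void);

    /* Assumed layout: a header followed by a zero-terminated list of the
     * addresses of the executable nodes this link triggers. */
    typedef struct link_node {
        unsigned header;          /* length and other properties            */
        void    *targets[16];     /* zero-terminated list of node addresses */
    } link_node;

    extern void queue_node(void *node_address);

    /* Walk the list until a zero entry (end of list), either queuing each
     * target or executing it directly; direct execution skips the queue
     * and reduces overhead. */
    void execute_link(const link_node *link, int execute_directly)
    {
        for (size_t i = 0; link->targets[i] != 0; i++) {
            if (execute_directly)
                ((node_logic)(uintptr_t)link->targets[i])();  /* run it now */
            else
                queue_node(link->targets[i]);          /* defer to the queue */
        }
    }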
- the memory means 105 may be system executable memory that can reside on
- the memory means 105 may be virtual memory and manages objects or nodes of up to 2 gigabytes in a flat addressable space of 2 gigabytes per system.
- the nodes in an execution graph can be linked to nodes in any other execution graph, creating an executable virtual space equal in size to all the space managed by all system-enabled computers that are network connected. This creates a fine grained global execution space in which any node can discretely trigger any other node.
- the execution graph provides the method of organization as well as execution. This organizational approach is referred to as an object centric environment vs. a file centric environment.
- the object representing the document is queued and executed by selecting the object.
- This invention is a complete computing system. It unifies elements of computation, communication, and data management. It executes its own logic, has its own memory manager, has its own scalable quality of service protocol and uses a
- the system does not use a classic assembly or compilation process to load objects.
- Objects arrive into the system via network messages 125 or source code stored in files 120.
- the definition of an object is descriptive, not binary.
- An object's description is processed by the loading means 130 and a binary object with data and instructions is produced and linked into the execution graph 140. Objects combine data and binary executable instructions in the same entity.
- An object's binary instructions are produced by a compilation process that is part of the loading means
- the compilation process is table driven making use of a "generic" assembly language.
- the assembly language is defined using an execution graph 140 that defines its syntax and semantic rules. Additionally, the table includes the processor's native hex instruction codes that are produced as a consequence of the compilation process. The native hex codes are processor dependent and can be updated for any type of processor. This makes the execution graph 140 completely portable and platform independent.
- register names and operations have been abstracted. Doing this allows one to develop efficient code without having to worry about portability issues.
- the system makes special use of Pentium registers edi, esi and ebp.
- Other registers including eax, ebx, ecx are general purpose.
- the use of a generic assembler insulates the programmer from these special use registers.
- General use registers are labeled
- the generic assembler emulates the additional registers using cache memory.
- the goal of the generic assembler is to allow a developer to write efficient low level code that is portable across incompatible processors.
- Viruses occur when illegal sequences of code invade the system.
- the code sequences can be designed to destroy or modify information or simply hang the system.
- Objects being loaded into the system do not have binary executable logic associated with them.
- the executable portion of the object is generated at the time of load. This tactic makes it difficult to introduce illegal code sequences into the system.
- if an illegal code sequence is detected, the system responds by marking the object as non-executable.
- the system is constantly verifying the integrity of objects in pages of memory. If a page of memory or an object is corrupted the memory manager flags the page or object so that it will not be used.
- Before a node can be executed it must be loaded into the execution graph by the loading means 130. As shown in FIG. 4, nodes can be loaded from source files 120 or as messages received over the network 125.
- the first step in loading nodes is to determine where they will reside in the execution graph.
- a query 422 precedes a node definition. The query references an existing node within the network.
- an editing operation must be specified. There are three editing operations: add, remove and modify. If no operation is specified the system assumes the node is being added.
- the loading means 130 parses 424 the node to determine its type, size and instantiation values.
- the loading means 130 allocates memory 426 in the memory means 105 for the node and links it into the execution graph 140.
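- A skeleton, in C, of the loading steps just described (query 422, parse 424, allocate 426); every type and helper name here is an assumption for illustration only:

    #include <stdlib.h>

    typedef enum { EDIT_ADD, EDIT_REMOVE, EDIT_MODIFY } edit_op;
    typedef struct node node;                      /* opaque node type           */

    extern node  *find_node(const char *query);    /* resolve the query 422      */
    extern size_t parsed_size(const char *src);    /* parse 424: type and size   */
    extern void   instantiate(node *n, const char *src); /* parse 424: values    */
    extern void   link_into_graph(node *at, node *n);    /* place in the graph   */

    void load_node(const char *query, edit_op op, const char *source)
    {
        node *at = find_node(query);               /* where the node will reside */
        if (op == EDIT_REMOVE)
            return;                                /* removal path omitted here  */

        node *n = malloc(parsed_size(source));     /* allocate memory 426        */
        instantiate(n, source);                    /* fill instantiation values  */
        link_into_graph(at, n);                    /* add (default) or modify    */
    }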
- When the system starts up, it allocates memory in the memory means 105 for its page frame buffer, opens the file containing the execution graph and loads the graph into memory means 105. Next, all external utilities and objects are registered and linked into the system. In other words, the run time addresses of all external executables are stored in the execution graph 140 as another node.
- This technique allows a developer to write executable logic in a language other than the system language.
- the addresses of external applications 540 and 550 are added to the execution graph 140 in the execution space 135.
- the run time addresses are stored in the execution graph as another node. External applications may be Windows DCOM or Unix CORBA objects.
- the queuing means 150 then queues the addresses of all nodes that are to be executed when the system starts. Finally control branches to the main program of the execution means 170.
- This invention supports more than one execution means 170. Therefore, all nodes whose addresses are on the queue may be executed in parallel, because the nodes are independent of the number of processors available to execute them and of the number of computers over which they can be partitioned and executed.
- the queue provides a metric for the instantaneous computing load required of any computer at any instant in time.
- the number of addresses in the queue multiplied by the execution time for the object represents an execution backlog.
- this metric provides information on how and when to partition a program and distribute its execution to achieve optimal load balancing.
- the metric also allows the system to determine when a computer has reached a critical backlog.
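- A trivial C sketch of this backlog metric; the averaged per-node execution time and the critical threshold are assumptions, not values from the patent:

    /* Backlog = number of queued addresses x estimated execution time per node. */
    double execution_backlog(unsigned queued_addresses,
                             double avg_node_time_seconds)
    {
        return queued_addresses * avg_node_time_seconds;
    }

    /* A computer might be treated as critically backlogged, and part of its
     * execution graph partitioned and distributed, once this value exceeds a
     * chosen threshold. */
    int backlog_is_critical(unsigned queued_addresses,
                            double avg_node_time_seconds,
                            double threshold_seconds)
    {
        return execution_backlog(queued_addresses, avg_node_time_seconds)
               > threshold_seconds;
    }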
- the same queue execution method is most suitable for machines with no more than four execution means. The reason is that each execution means must share memory with other processors and each processor takes turns retrieving a node address from the instruction queue. As shown in FIG. 6, the same queue execution method is implemented by one queue being accessed by multiple execution means.
- Initially queue 510 contains nodes N1-N3.
- Processor 515 executes node N1 and, when finished, places node N4 at the end of queue 510.
- Processor 520 executes node N2 and when finished places node N5 at the end of queue 510. All nodes in the queue 510 are considered parallel events and can be executed in any order. This ensures that nodes already in memory can be executed while other nodes are being swapped into memory. This allows the implementation of an optimal page swapping algorithm.
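- A C sketch of same-queue execution in which several execution means claim addresses from one shared queue; the use of C11 atomics is an assumption, since the patent does not specify the synchronization mechanism:

    #include <stdatomic.h>

    static void *_Atomic shared_queue[65536];
    static atomic_ushort q_start;                 /* shared "start" index */
    static atomic_ushort q_end;                   /* shared "end" index   */

    /* Each execution means claims the next queued address by atomically
     * advancing the start index, so no two processors execute the same node.
     * The producer must store the address before advancing q_end. */
    void *claim_next_node(void)
    {
        unsigned short start = atomic_load(&q_start);
        for (;;) {
            if (start == atomic_load(&q_end))
                return 0;                          /* nothing queued right now */
            if (atomic_compare_exchange_weak(&q_start, &start,
                                             (unsigned short)(start + 1)))
                return atomic_load(&shared_queue[start]);
            /* on failure 'start' was reloaded; try again */
        }
    }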
- Round Robin is most suitable for systems with more than four execution means 170. This scheme also assumes that all execution means have access to all memory.
- the organization of round robin is a matrix of processors such as 710, 720, 730, 740 and 750. Each cell in the matrix contains a processor and an execution queue.
- matrix 710 contains processor 711 and queue 712.
- the processor 711 reads node addresses from its own queue 712 and executes them, but queues addresses into adjacent queues. For example, any nodes to be queued by processor 711 will be placed in the queue of matrix 740. This mechanism provides a natural means of achieving load balancing across the execution space.
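- A small C sketch of this round-robin placement, assuming for simplicity that "adjacent" means the next cell in a fixed ring rather than the exact matrix adjacency of FIG. 7; all names are illustrative:

    #define CELLS 5                       /* e.g. cells 710-750 in FIG. 7 */

    typedef struct {
        void    *queue[65536];
        unsigned start, end;              /* wrapped via masking below    */
    } cell_queue;

    static cell_queue cells[CELLS];

    /* A cell reads from its own queue but places newly-ready nodes into an
     * adjacent cell's queue, which spreads load across the execution space. */
    void queue_to_adjacent(unsigned self, void *node_address)
    {
        unsigned    target = (self + 1) % CELLS;      /* simplified adjacency */
        cell_queue *q      = &cells[target];
        q->queue[q->end++ & 0xffff] = node_address;   /* wrap within 65536    */
    }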
- matrices are 810, 820, 830, 840, 850,
- Each processor reads from, executes and queues addresses to its own queue.
- the execution space is partitioned across the processor space.
- this system allows direct communication between nodes within its execution graph.
- the execution graph in the execution space is a collection of linked nodes, which are objects. Objects can be controls, mathematical expressions, functions and external applications.
- the system communicates with other system installations by sending messages to nodes. This allows any application running on a system to communicate directly with nodes within another system's execution space. For example, if a form has two-hundred fields on it that display results from two-hundred remote computations, each field would be linked to a discrete node within a discrete body of logic on the remote computer(s) and vice versa.
- This technique of linking nodes between applications is also applied when linking external applications and objects.
- the system does so by sending messages to the node that represents the external object.
- All external applications used by the system must be represented by a node in the system.
- the node identifies the name of the object and its location in the network.
- Refrs are used in class definitions to create conceptual objects whose data can be distributed across a network, or they can exist as standalone objects that tie together portions of the execution graph 140.
- Refrs are used in a program's logic to reference objects and datums. Datums are fields within an object, arguments passed to a function or stand alone variables. Refrs act like the glue within the execution graph 140 allowing whole groups of objects to be triggered when the value they reference changes. When a portion of the execution graph 140 is distributed, the partitioning occurs at refr nodes. When the graph portion has been re-distributed, references are automatically updated with the IP addresses of the remote computer and the virtual address of the corresponding remote refr.
9. External Applications
- This system allows one to register external applications.
- An enabled application makes a call to a register utility for permanent registration or to a logon utility for temporary registration. These utilities are part of a link library that must be included during the development of the application.
- When an application registers itself with the system, two shared memory buffers are created. One buffer is used for sending messages and the other for receiving them. Buffers appear as data structures within C programs and as objects within C++ programs. Applications use a collection of utilities or methods to connect, request, receive and send messages. The preferred embodiment of this system also uses these buffers.
- the buffers are designed to allow the application and the system to read and write simultaneously without the use of semaphores. There is no need to copy a message from one address space to another as is classically done in inter-process communications. This technique allows out-of-process calls to execute almost as fast as in-process calls.
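- A C sketch of one such shared buffer as a single-producer/single-consumer ring, which lets one side write while the other reads without semaphores; the layout, sizes, and use of C11 atomics are assumptions rather than the patent's actual structure:

    #include <stdatomic.h>
    #include <string.h>

    #define BUF_SLOTS 256
    #define MSG_BYTES 512

    typedef struct {
        _Atomic unsigned head;                    /* advanced only by the writer */
        _Atomic unsigned tail;                    /* advanced only by the reader */
        char messages[BUF_SLOTS][MSG_BYTES];
    } shared_buffer;

    /* Writer side (e.g. the registered application); returns 0 when full. */
    int buffer_send(shared_buffer *b, const char *msg, size_t len)
    {
        unsigned head = atomic_load(&b->head);
        if (head - atomic_load(&b->tail) == BUF_SLOTS)
            return 0;                             /* full: try again later       */
        memcpy(b->messages[head % BUF_SLOTS], msg,
               len < MSG_BYTES ? len : MSG_BYTES);
        atomic_store(&b->head, head + 1);         /* publish only after the copy */
        return 1;
    }

    /* Reader side (e.g. the system); returns 0 when no message is waiting. */
    int buffer_receive(shared_buffer *b, char *out)
    {
        unsigned tail = atomic_load(&b->tail);
        if (tail == atomic_load(&b->head))
            return 0;                             /* empty                       */
        memcpy(out, b->messages[tail % BUF_SLOTS], MSG_BYTES);
        atomic_store(&b->tail, tail + 1);
        return 1;
    }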
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Devices For Executing Special Programs (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98962086A EP1058878A1 (en) | 1997-12-12 | 1998-12-11 | Naturally parallel computing system and method |
AU17248/99A AU1724899A (en) | 1997-12-12 | 1998-12-11 | Naturally parallel computing system and method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US6942897P | 1997-12-12 | 1997-12-12 | |
US6943397P | 1997-12-12 | 1997-12-12 | |
US60/069,428 | 1997-12-12 | ||
US60/069,433 | 1997-12-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999030230A1 (en) | 1999-06-17 |
Family
ID=26750051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1998/026436 WO1999030230A1 (en) | 1997-12-12 | 1998-12-11 | Naturally parallel computing system and method |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1058878A1 (en) |
AU (1) | AU1724899A (en) |
WO (1) | WO1999030230A1 (en) |
-
1998
- 1998-12-11 EP EP98962086A patent/EP1058878A1/en not_active Withdrawn
- 1998-12-11 WO PCT/US1998/026436 patent/WO1999030230A1/en not_active Application Discontinuation
- 1998-12-11 AU AU17248/99A patent/AU1724899A/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4930102A (en) * | 1983-04-29 | 1990-05-29 | The Regents Of The University Of California | Dynamic activity-creating data-driven computer architecture |
US4972314A (en) * | 1985-05-20 | 1990-11-20 | Hughes Aircraft Company | Data flow signal processor method and apparatus |
US5043873A (en) * | 1986-09-05 | 1991-08-27 | Hitachi, Ltd. | Method of parallel processing for avoiding competition control problems and data up dating problems common in shared memory systems |
US5438680A (en) * | 1988-04-29 | 1995-08-01 | Intellectual Properties And Technology, Inc. | Method and apparatus for enhancing concurrency in a parallel digital computer |
US5675757A (en) * | 1988-07-22 | 1997-10-07 | Davidson; George S. | Direct match data flow memory for data driven computing |
US5465372A (en) * | 1992-01-06 | 1995-11-07 | Bar Ilan University | Dataflow computer for following data dependent path processes |
US5483657A (en) * | 1992-02-26 | 1996-01-09 | Sharp Kabushiki Kaisha | Method of controlling execution of a data flow program and apparatus therefor |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1152331A2 (en) * | 2000-03-16 | 2001-11-07 | Square Co., Ltd. | Parallel task processing system and method |
EP1152331A3 (en) * | 2000-03-16 | 2005-03-02 | Kabushiki Kaisha Square Enix (also trading as Square Enix Co., Ltd.) | Parallel task processing system and method |
GB2425868B (en) * | 2004-10-18 | 2007-07-04 | Manthatron Ip Ltd | Logic-based Computing Device and Method |
US7822592B2 (en) * | 2004-10-18 | 2010-10-26 | Manthatron-Ip Limited | Acting on a subject system |
US7844959B2 (en) | 2006-09-29 | 2010-11-30 | Microsoft Corporation | Runtime optimization of distributed execution graph |
US8201142B2 (en) | 2006-09-29 | 2012-06-12 | Microsoft Corporation | Description language for structured graphs |
Also Published As
Publication number | Publication date |
---|---|
EP1058878A1 (en) | 2000-12-13 |
AU1724899A (en) | 1999-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6535903B2 (en) | Method and apparatus for maintaining translated routine stack in a binary translation environment | |
US6546553B1 (en) | Service installation on a base function and provision of a pass function with a service-free base function semantic | |
US6226789B1 (en) | Method and apparatus for data flow analysis | |
US6502237B1 (en) | Method and apparatus for performing binary translation method and apparatus for performing binary translation | |
US6199095B1 (en) | System and method for achieving object method transparency in a multi-code execution environment | |
US6941561B1 (en) | Method and apparatus for remotely running objects using data streams and/or complex parameters | |
US8028298B2 (en) | Systems and methods for managing shared resources in a computer system | |
US7543301B2 (en) | Shared queues in shared object space | |
US20010037417A1 (en) | Method and system for dynamically dispatching function calls from a first execution environment to a second execution environment | |
Arvind et al. | A multiple processor data flow machine that supports generalized procedures | |
JPH1078873A (en) | Method for processing asynchronizing signal in emulation system | |
Armand et al. | Revolution 89 or ‘‘Distributing UNIX Brings it Back to its Original Virtues’’ | |
US20040123308A1 (en) | Hybird of implicit and explicit linkage of windows dynamic link labraries | |
Traub et al. | Overview of the Monsoon Project. | |
CA2167306C (en) | Multiple entry point method dispatch | |
WO1999030230A1 (en) | Naturally parallel computing system and method | |
Hunt et al. | Intercepting and Instrumenting COM Applications. | |
Zeigler et al. | Ada for the Intel 432 microcomputer | |
US20030220936A1 (en) | Software architecture for managing binary objects | |
Lee | PC-Choices object-oriented operating system | |
Aigner | Communication in Microkernel-Based Operating Systems | |
Richards | The BCPL Cintsys and Cintpos User Guide | |
Asthana et al. | Towards a programming environment for a computer with intelligent memory | |
Aubert | Quick Recipes on Symbian OS: Mastering C++ Smartphone Development | |
Pinkenburg et al. | Parallel I/O in an object-oriented message-passing library |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1998962086 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 1998962086 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1998962086 Country of ref document: EP |