WO1999035548A2 - Object oriented processor arrays - Google Patents

Object oriented processor arrays

Info

Publication number
WO1999035548A2
Authority
WO
WIPO (PCT)
Prior art keywords
processor
object oriented
functional
message
processors
Application number
PCT/US1999/000307
Other languages
French (fr)
Other versions
WO1999035548A3 (en)
Inventor
Jeffrey I. Robinson
Original Assignee
I. Q. Systems, Inc.
Priority claimed from US09/004,174 external-priority patent/US6052729A/en
Priority claimed from US09/003,684 external-priority patent/US6567837B1/en
Priority claimed from US09/003,993 external-priority patent/US6615279B1/en
Application filed by I. Q. Systems, Inc. filed Critical I. Q. Systems, Inc.
Priority to JP2000527869A priority Critical patent/JP2002542524A/en
Priority to AU24522/99A priority patent/AU2452299A/en
Priority to CA002317772A priority patent/CA2317772A1/en
Priority to EP99904036A priority patent/EP1121628A2/en
Publication of WO1999035548A2 publication Critical patent/WO1999035548A2/en
Publication of WO1999035548A3 publication Critical patent/WO1999035548A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4403Processor initialisation

Definitions

  • the invention relates to object oriented processors and processor systems. More particularly, the invention relates to an object oriented processor or processor system which utilizes a library of selectable processor objects in order to implement an array of processor objects.
  • the processors or processor system is preferably arranged such that the processor objects are self-instantiated in virtually any combination, and the processors or processor system preferably utilizes an event-reaction communication protocol through which processor objects communicate, and which is controlled by a high level scripting language.
  • Modern computers permit seemingly simultaneous execution of many operations by interrupting the microprocessor periodically to execute several software threads in turn.
  • the input from this peripheral to the microprocessor is seemingly simultaneously displayed by the microprocessor on a video display peripheral.
  • the microprocessor is interrupted periodically from displaying output on the video display in order to obtain input from the keyboard. It is only because the microprocessor operates at a very high speed that there is an illusion of simultaneity. In a more complex processing system, there may be many threads vying for microprocessor attention at any time.
  • For example, in a desktop multimedia computer, several peripheral devices must be controlled by the microprocessor in a seemingly simultaneous manner in order to produce the proper results, and different operations such as displaying video and playing audio must be handled by separate threads.
  • the programming environment in a system having so many threads is incredibly complex.
  • the system software must be written to schedule microprocessor attention to each thread, assign priority to each thread and allow peripherals to interrupt the microprocessor at appropriate times. The system software must then schedule tasks for the microprocessor in response to the interrupts from various peripherals.
  • In order to relieve the host processor from performing every task, multiprocessor systems have been proposed. Some multiprocessor systems are successful in dividing tasks among processors when the tasks are well defined. For example, it is not uncommon to divide tasks between a data processor and a signal processor in systems which deal with signals and data in real time. It is more difficult to divide data processing tasks among several data processors. The operating system must decide which tasks will be performed by which processor and must schedule tasks so that processors do not remain idle while waiting for new tasks or while waiting for other processors to complete tasks so as to provide needed results. Consequently, there has been very little success in developing a general purpose multiprocessor system and there is no standard programming language for programming a multiprocessor system.
  • U.S. Patent Number 5,095,522 to Fujita et al. discloses an object-oriented parallel processing system which utilizes "concept objects" and "instance objects".
  • the system utilizes a host processor and a plurality of general purpose processors which are programmed by the host processor.
  • the host user must program (generate concept and instance objects) for each processor before parallel processing can begin.
  • Fujita et al. considers this aspect of their system to be a feature which allows dynamic changes in the functionality of each of the processors. However, this aspect of their system greatly complicates the host processor software.
  • U.S. Patent Number 5,165,018 to Simor describes a system in which "nodes" are provided with generic configuration rules and are configured at runtime via resource definition messages from the control node. Simor considers this aspect of his system to be an advantage which, among other things, "isolates the hardware from the software” and “allows programs to be written as if they were going to be executed on a single processor.” In addition, Simor's system permits programs to be “distributed across multiple processors without having been explicitly designed for that purpose.”
  • Fujita et al. and Simor utilize general purpose processors and attempt to isolate the hardware from the software, freeing the programmer to write code as if it were being executed on a single processor.
  • writing multithreaded code for a single microprocessor is a daunting task.
  • Neither Fujita et al. nor Simor proposes any solution to this problem.
  • This approach is based on the belief that writing and de-bugging code is more time consuming and more expensive than linking together processors which contain pre-written, bug-free code. This approach enables rapid system development, relieves the host processor of many scheduling tasks, simplifies de-bugging, enables cross-platform support, allows software emulation of hardware devices, as well as providing other advantages.
  • object oriented processors communicate with each other and/or with the host processor via the exchange of high level messages.
  • This earliest implementation of the communication protocol required that the host poll at least some of the object oriented processors (i.e. those responsible for processing input data) to determine the availability of data. This was eventually found to detract from the goal of simple coding as the host code had to be written in a manner that would scan all possible input sources on a frequent basis. It was eventually decided that this polling created an undesirable overhead and coding complication. Since many of the originally developed object oriented processors operated in real time, the polling scan rate could be high and thus the overhead could be substantial.
  • the early communication protocol did not provide information about the source of the data. This was instead derived by specific information requests by the host. Thus, several message exchanges might have been required before both data and source were determined.
  • each object oriented processor has a functionality which defines its physical connectability. More specifically, as embodied on a single chip, each object oriented processor (or collection of object oriented processors) presents a number of pins for coupling the processor to other devices. According to previously disclosed embodiments of the object oriented processors, the functionality of each pin is substantially fixed at the time the object oriented processor is manufactured. For example, as disclosed in related application Serial Number 08/525,948, a user interface controller utilizes thirty-seven pins, most of which have a set functionality. Several of the pins have alternate functionality. For example, pins A0 through A7 are an aux port.
  • pins A1 and A2 can be used as LCD enable pins and pins A3-A7 can be used as LED enable pins.
  • the functional resources of the object oriented processors are pre-defined with respect to certain pins and cannot be substantially changed by the developer/user.
  • object oriented processor array means a collection of object oriented processors where each object oriented processor incorporates a separate hardware processor, or a collection of object oriented processors where each object oriented processor is embodied as a virtual processor sharing the same hardware processor, or any combination of discrete hardware processors and virtual processors.
  • Another object of the invention is to provide an object oriented processor array which utilizes memory in an efficient manner.
  • an object oriented processor array of the present invention includes a readable memory containing a library of configurable (programmable) functions (also referred to as objects) and a writable memory in which objects are instantiated and configured. More specifically, the object oriented processor array includes a system functionality (system object) which is automatically instantiated in writable memory at power-up, which calls other objects to be instantiated in writable memory in response to commands from a host processor or a boot ROM, and which maintains an active task list and other information about instantiated objects.
  • the object oriented processor array according to the invention further includes a communications interface, an input message processor, and an output message processor.
  • the communications interface allows the object oriented processor array to communicate with other object oriented processor arrays and/or with a host processor or script server.
  • the output message processor preferably includes an output flow manager for handling messages from processor objects in the array and a central output registry for queuing messages.
  • the object oriented processor array is embodied as a virtual machine which is formed from software which runs on a microprocessor. Therefore, the software which embodies the object oriented processor array is provided with a timing kernel which simulates parallelism and a memory manager which allocates memory to objects when they are instantiated.
  • the library of functions is configured as a library of objects stored in ROM.
  • Each object includes a parser layer, a functional layer (which preferably includes a runtime layer and a background layer), a time layer, and an instantiation layer.
  • the system object is also stored in ROM and is automatically instantiated in RAM when the processor array is powered on, and in a preferred embodiment of the invention, reserves RAM for an active task list table (function pointers to instantiated objects), an active task list name table (the names of the instantiated objects), and an active task list data space (pointers to the allocated memory blocks for each instantiated object).
  • the system object is similar to the other objects but handles global methods and functions which are common to all objects and essentially consists of a parser layer only. The primary function of the system object is to call on objects to instantiate themselves.
  • In response to a high level command from a host processor (or a boot ROM), the system object calls the instantiation layer of an object in the object library and commands the object to instantiate itself in RAM.
  • the instantiation layer of the object calls the memory manager and requests an allocation of RAM.
  • the memory manager returns a pointer (to a starting address in RAM) to the object.
  • the object returns the pointer to the system object after performing any necessary initializations to complete instantiation.
  • After the object informs the system object that instantiation was successful, the system object stores a pointer in the active task list table to the portion of the ROM in which the object resides. Each pointer in the active task list is associated with an index number.
  • the system object also stores the name of the instance of the object in the active task list name table which associates the name with the same index number, and stores the pointer to the allocated memory block in the active task list data space which is also associated with the same index number.
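  • For illustration, the bookkeeping just described might be sketched in C as three parallel tables tied together by a single index number. This is a hedged sketch only; all names, types, and table sizes below are assumptions rather than details from the patent.

```c
#include <string.h>

#define MAX_ACTIVE_TASKS 16   /* assumed limit; the patent does not specify one */
#define MAX_NAME_LEN      8   /* assumed maximum instantiation-name length      */

typedef void (*object_entry_t)(void);   /* pointer to an object's code in ROM */

/* Three parallel tables keyed by one index number per instantiation. */
static object_entry_t active_task_list[MAX_ACTIVE_TASKS];                  /* ROM pointers         */
static char           active_task_names[MAX_ACTIVE_TASKS][MAX_NAME_LEN];   /* instantiation names  */
static void          *active_task_data[MAX_ACTIVE_TASKS];                  /* allocated RAM blocks */
static int            active_task_count;

/* Called by the system object after an object reports successful instantiation. */
int register_instantiation(object_entry_t rom_entry, const char *name, void *ram_block)
{
    if (active_task_count >= MAX_ACTIVE_TASKS)
        return -1;                                   /* no room in the active task list */
    int index = active_task_count++;
    active_task_list[index] = rom_entry;
    active_task_data[index] = ram_block;
    strncpy(active_task_names[index], name, MAX_NAME_LEN - 1);
    active_task_names[index][MAX_NAME_LEN - 1] = '\0';
    return index;        /* the same index locates the entry in all three tables */
}
```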
  • the system object then recomputes the scheduling of objects in the active task list.
  • each instantiated object stores the pointer to its allocated RAM in a reserved area of RAM.
  • the instantiated object arranges its allocated RAM in several parts.
  • the first part is the output message header which includes a pointer to the output buffer of the object instantiation, the message length, the active task list index, the flow priority of the object instantiation, the message type, and the source ID.
  • This first part is common in all objects, i.e. every instantiated object has this part in its allocated RAM.
  • the second part is private data used by the cell in the performance of its functionality. This second part may be different for each instantiated object and some objects may arrange additional parts of allocated RAM.
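  • As a hedged illustration of the RAM layout just described, the common output message header followed by object-specific private data might look like the following C structures (field widths and the example encoder fields are assumptions):

```c
#include <stdint.h>

/* Common first part of every instantiated object's RAM block: the output
 * message header, mirroring the items listed above (widths are assumed). */
typedef struct {
    uint8_t *out_buffer;     /* pointer to the instantiation's output buffer */
    uint16_t msg_length;     /* length of the pending outgoing message       */
    uint8_t  task_index;     /* index into the active task list              */
    uint8_t  flow_priority;  /* 0 = polled, 1-7 = increasing importance      */
    uint8_t  msg_type;       /* e.g. data event, command acknowledge         */
    uint8_t  source_id;      /* identifies the source of the data            */
} output_msg_header_t;

/* The second part is private data and differs per object; a hypothetical
 * encoder object might keep its pin assignment and last count there. */
typedef struct {
    output_msg_header_t header;   /* common to all instantiated objects */
    uint8_t  pin_base;            /* illustrative private data          */
    int16_t  last_count;
} enc4_instance_t;
```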
  • the input message processor checks the syntax of all incoming messages, buffers the message, examines the command and looks at the active task list name table to determine the index number for the instantiated object to which the command is directed, and passes the message and the index number to the parser layer of the object.
  • the object parser layer interprets the pin assignment message and stores the pin assignment in its private data area of RAM for the named instantiation of the object.
  • objects may be instantiated several times so long as enough hardware resources (pins and RAM) are available to support another instantiation.
  • the system object keeps track of all object instantiations by placing entries in the active task list table, the active task list name table, and the active task list data space table.
  • the memory manager maintains a pointer to the memory heap which is utilized and generates an error message if requested to assign more RAM than is available.
  • a message flow priority can be assigned.
  • a flow priority of 0-7 may be assigned where 0 represents polled priority and 1-7 indicate increasing levels of importance.
  • the flow priority is stored by the instantiation of an object in its output message header part of RAM.
  • the system object initializes a global variable or counter indicating the number of active tasks. Each time an object is instantiated, this variable is incremented. When the variable is 0, the system object returns control to the timing kernel which scans the active task list. Each time an object is instantiated, all active tasks are stopped, all instantiated objects are called to their timing layer, and the tasks are scheduled.
  • the system object assigns an offset to each object instantiation which the timing layer stores in private data.
  • the object returns a worst case time to the system object and the worst case time is used to calculate the offset for the next active task.
  • the time between the worst case and the actual time is advantageously used by the system object for system (background) functions; i.e., system functions are not otherwise scheduled and therefore do not require overhead.
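  • A minimal sketch of this scheduling step, assuming a simple time-layer interface in C (the function names and tick units are illustrative, not from the patent):

```c
/* Recompute scheduling after an instantiation: query each object's time layer
 * for its worst case time and hand back a start offset equal to the sum of its
 * predecessors' worst case times.  Slack between worst case and actual time is
 * left for the system object's background functions. */
typedef struct {
    unsigned (*worst_case_time)(void);  /* time layer query, in clock ticks             */
    void     (*set_offset)(unsigned);   /* time layer stores the offset in private data */
} time_layer_t;

void reschedule(time_layer_t tasks[], int n_tasks)
{
    unsigned offset = 0;
    for (int i = 0; i < n_tasks; i++) {
        tasks[i].set_offset(offset);           /* when this task's window begins  */
        offset += tasks[i].worst_case_time();  /* reserve its worst case duration */
    }
    /* 'offset' now marks where the timing kernel's cycle wraps around. */
}
```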
  • a message to a particular instantiation of an object is parsed by the input message processor which scans the active task list name table to find the index into the active task list and the pointer to the object instantiation addressed.
  • the object receives the message from the input message processor and the index.
  • the object uses the index and the active task list data space table to find which instantiation of itself is being addressed.
  • Messages from an instantiation of an object are placed in the output message header part of its RAM and a request for registration is sent to the central output registry.
  • the output registry maintains a queue of pointers to messages in output message header parts. The queue is scanned by one or more output flow managers which form output messages from the information held in the output message header parts.
  • one or more object oriented processor arrays are coupled to a central script server or host processor. Messages which result from data events in any of the processor objects are sent to the central script server for processing.
  • the central script server parses the messages it receives from processor objects and executes a script which has been written for this type of data event.
  • the script usually results in the sending of a message to another processor object on the same array or on a different array than the array containing the processor object having the data event.
  • the flow of messages is based on an event-reaction architecture.
  • the event-reaction architecture for message flow is a flexible method of assigning priority to multiple bus users which conserves bandwidth and simplifies host processor programming.
  • when a processor object has a message to send, it generates a data event which is registered with the target recipient of the message (usually the script server).
  • the target reacts to the event by allowing a variable amount of I/O exchange between the processor object and the target prior to an acknowledgement of the data event.
  • until the data event is acknowledged, no other data event may be sent to the target.
  • a fixed number of data events may be pending simultaneously.
  • each node is aware of network traffic and data events related to a particular target (script server) receiver are registered with all nodes which have that receiver as a target as well as with the target.
  • the number of data events which may be simultaneously pending at any time is determined by the target and known to each of the nodes.
  • the target arbitrates the flow of messages based on the registration of data events. Arbitration may be on a FIFO basis or the targeted receiver may grant priority to later generated data events which have a higher flow priority.
  • the output message buffers of the instantiated objects provide for a message flow priority.
  • the event-reaction model permits the script server code to be linear, with the number of threads in the script server code depending on the number of pending data events permitted.
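  • A hedged sketch of the target-side arbitration described above, assuming a small fixed limit on pending data events and arbitration that may prefer a later event with a higher flow priority (the limit and all names are illustrative):

```c
#include <stdint.h>

#define MAX_PENDING 4   /* chosen by the target and known to every node (assumed value) */

typedef struct {
    uint8_t source_obj;      /* which object instantiation raised the event         */
    uint8_t flow_priority;   /* 1-7, copied from the object's output message header */
} data_event_t;

static data_event_t pending[MAX_PENDING];
static int pending_count;

/* Register a data event; nodes must not send once the limit is reached. */
int register_data_event(data_event_t ev)
{
    if (pending_count >= MAX_PENDING)
        return -1;
    pending[pending_count++] = ev;
    return 0;
}

/* The target reacts: service either FIFO or the highest flow priority.
 * Caller must ensure pending_count > 0 before calling. */
data_event_t next_event_to_service(void)
{
    int best = 0;
    for (int i = 1; i < pending_count; i++)
        if (pending[i].flow_priority > pending[best].flow_priority)
            best = i;                              /* later, higher-priority event wins */
    data_event_t ev = pending[best];
    for (int i = best; i < pending_count - 1; i++)
        pending[i] = pending[i + 1];               /* acknowledging removes it from the queue */
    pending_count--;
    return ev;
}
```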
  • a central hub arbitrates the exchange of messages among a number of object oriented processor arrays coupled to the hub by individual links rather than a common bus.
  • the hub communicates with each array according to the message flow priorities of the cells in the array.
  • the number of threads in the code on the hub depends on the sum of the number of pending data events permitted in each object oriented processor array.
  • one or more object oriented processor arrays are provided with local, internal script servers.
  • communications are controlled according to the event-reaction protocol by providing the output message processor with additional functionality to register and queue data events and an event script look up table to determine which events relate to internal messages.
  • the input message processor is provided with additional functionality to queue and buffer messages destined for objects in the array.
  • a high level language for communication between object oriented processors and a host processor during set-up and during operation.
  • the high level language according to the invention includes support for the event-reaction protocol, an efficient addressing scheme, better use of bandwidth and simplified host parsing.
  • the high level language messages are exchanged in packets of variable length with a well defined header.
  • the header includes addressing information, an indication of the type of message which follows, an indication of the method or data type, and an indication of the length of the remaining data in the packet.
  • the high level language is self-aligning, simple, and robust.
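  • A hedged C sketch of such a packet header; the patent specifies only the kinds of information carried, so the field names and one-byte widths below are assumptions:

```c
#include <stdint.h>

/* Header of a variable-length packet: addressing, message type, method or
 * data type, and the length of the remaining data in the packet. */
typedef struct {
    uint8_t dest_array;   /* processor array (node) address of the recipient */
    uint8_t dest_object;  /* named object instantiation within that array    */
    uint8_t msg_type;     /* e.g. command, data event, acknowledge           */
    uint8_t method;       /* method or data type the payload refers to       */
    uint8_t length;       /* number of data bytes following the header; the
                             receiver can always skip to the next packet
                             boundary, which keeps the protocol self-aligning */
} packet_header_t;
```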
  • the object oriented processor arrays according to the invention are embodied in many alternative constructions including running software on a microprocessor, field programmable gate arrays, multiple microprocessors, and other hardware devices.
  • Figure 1 is a schematic block diagram of the major hardware components of an object oriented processor array according to the invention;
  • Figure 2 is a schematic block diagram of the major functional components of an object oriented processor array according to the invention.
  • Figure 3 is a schematic memory map of the writable memory area in the object oriented processor array of Figures 1 and 2;
  • Figure 4 is a flow chart illustrating the basic steps in the initialization, setup, and operation of the object oriented processor array of Figures 1 and 2;
  • Figure 4a is a flow chart illustrating the basic functions and operation of the system object;
  • Figure 4b is a flow chart illustrating the basic functions and operation of the memory manager;
  • Figure 4c is a flow chart illustrating the basic functions and operation of the timing kernel, active objects, and system object with regard to scheduling;
  • Figure 5 is a schematic memory map of the writable memory area in an alternate embodiment of an object oriented processor array;
  • Figure 6 is a schematic flow chart illustrating the steps in the setup programming of the alternate embodiment;
  • Figure 7 is a schematic flow chart illustrating the operational mode of the alternate embodiment;
  • Figure 8 is a schematic block diagram of an object oriented processor array according to the invention coupled to a host processor and a power supply;
  • Figure 9 is a schematic block diagram of an implementation of an object oriented processor array to control a "smart telephone";
  • Figure 10 is a schematic block diagram of the major functional components of an object oriented processor array according to a second embodiment of the invention;
  • Figure 11 is a schematic block diagram of an object oriented processor array according to the invention utilizing multiple microprocessors.
  • Figure 12 is a schematic diagram generally illustrating the implementation of the object oriented processor array onto any hardware device or devices.
  • an object oriented processor array 10 includes a readable memory 12, a writable memory 14, one or more programmable processors 16 coupled to the memory 12 and 14, and a real world interface such as a number of pins 18 which are coupled to the processor(s) 16.
  • one embodiment of the invention resides on a single chip which includes a single general purpose microprocessor, RAM and ROM and which has a number of pins.
  • the readable memory 12 contains a library 20 of configurable (programmable) functions (also referred to as objects) which are instantiated and configured in the writable memory 14 as described in detail below.
  • the object oriented processor array 10 includes a system object 22 which is automatically instantiated in writable memory at power-up, which calls other objects from the library 20 to be instantiated in writable memory 14 in response to commands from a host processor or a boot ROM as described in more detail below.
  • Once an object has been instantiated, it appears as an active object, e.g. 23a, 23b, 23c, in the processor array 10.
  • the object oriented processor array 10 further includes a communications interface 24, an input message processor 26, and an output message processor 28.
  • the communications interface 24 allows the array of active objects 23a-23c to communicate with other object oriented processor arrays and/or with a host processor or script server via a communications link or bus 25 (which may be in the form of a physical bus, a multi-ported memory or even a radio link).
  • the communications interface also allows the system object 22 to receive commands from a host or boot ROM.
  • the input message processor 26 is responsible for routing and basic syntax parsing of incoming messages. Once the message is received and deemed syntactically correct, it is routed to the parser layer of the addressed object as discussed below.
  • the output message processor 28 preferably includes an output flow manager 32 for handling messages from active objects in the array 10 to processors external of the array 10, and a central output registry 34 for queuing messages. All input to the output message processor 28 is through the central output registry 34.
  • upon the occurrence of an event within an object, the object calls the central registry 34 and provides a handle to a standard structure which is entered into the output queue.
  • the output queue is scanned by the flow managers which look for information on the output queue and the priority the object has been assigned, if any.
  • when a flow manager determines which object has subsequent use of the port, it constructs a message using information in the standard structure which determines the message type (e.g. data event, command ack, etc.), the name of the object that originated the message, the type or source of the data, and any data associated with the message which is derived by referencing a pointer.
  • the newly composed message is then sent to the output port and transmitted.
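  • A hedged sketch of an output flow manager draining the central output registry; the header structure is repeated from the earlier sketch so the example is self-contained, and the framing bytes and send callback are assumptions:

```c
#include <stdint.h>

/* Assumed structure from the earlier sketch of the output message header. */
typedef struct {
    uint8_t *out_buffer;
    uint16_t msg_length;
    uint8_t  task_index;
    uint8_t  flow_priority;
    uint8_t  msg_type;
    uint8_t  source_id;
} output_msg_header_t;

/* Pick the highest-priority registered entry, compose a message from the
 * standard structure (type, originator, data source, then the data found
 * through the pointer), transmit it, and remove the entry from the queue. */
void output_flow_manager_poll(output_msg_header_t *queue[], int *queue_len,
                              void (*send)(const uint8_t *msg, unsigned len))
{
    while (*queue_len > 0) {
        int best = 0;
        for (int i = 1; i < *queue_len; i++)
            if (queue[i]->flow_priority > queue[best]->flow_priority)
                best = i;
        output_msg_header_t *h = queue[best];

        uint8_t  msg[4 + 255];
        unsigned len = h->msg_length > 255 ? 255 : h->msg_length;
        msg[0] = h->msg_type;       /* e.g. data event, command ack   */
        msg[1] = h->task_index;     /* name of the originating object */
        msg[2] = h->source_id;      /* type or source of the data     */
        msg[3] = (uint8_t)len;
        for (unsigned i = 0; i < len; i++)
            msg[4 + i] = h->out_buffer[i];
        send(msg, 4 + len);

        for (int i = best; i < *queue_len - 1; i++)
            queue[i] = queue[i + 1];
        (*queue_len)--;
    }
}
```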
  • the object oriented processor array 10 is embodied as a virtual machine which is formed from software which runs on a microprocessor. Therefore, the software which embodies the object oriented processor array 10 is provided with a timing kernel 36 which simulates parallelism and a memory manager 38 which allocates memory to objects when they are instantiated. It will also be appreciated that when the object oriented processor array 10 is embodied as a virtual machine, the interconnections among the elements shown in Figure 2 are not physical connections, but rather indicative of the functional relationships among the elements.
  • each object e.g. 23c, includes a parser layer 40, a functional layer 42, a time layer 44, and an instantiation layer 46.
  • the parser layer contains the intelligence to interpret the vocabulary pertinent to the particular object and to perform tasks which can be performed immediately.
  • each object has a certain predefined functionality which is configurable.
  • the vocabulary for each object, therefore, will be governed somewhat by the functionality which the object contains and will also include some general purpose vocabulary for communication. Examples of how the vocabulary of each object may be different are shown in related application Serial Number 08/525,948.
  • the parser layer is the means by which an instantiated object is initialized and configured, and the means by which the object receives messages.
  • the functional layer 42 contains all of the intelligence needed to carry out the predefined functionality of the object and is preferably divided into a runtime layer and a background layer.
  • the runtime layer contains the functionality which needs to be executed on a continual basis and the background layer contains the functionality for relatively low priority tasks. For example, if the object is coupled to an input device, scanning the input device would be part of the runtime layer.
  • the functions performed in the background layer usually take a long time to execute such as low speed communications dialog with a device socket.
  • the time layer 44 participates in the scheduling of the runtime layer by providing information to the system object 22 about the dynamic performance and behavior of the particular object as described more fully below.
  • the instantiation layer 46 performs the tasks needed to instantiate the object when called by the system object as described more fully below.
  • the system object 22 is also preferably stored in ROM and is automatically instantiated when the processor array 10 is powered on.
  • the instantiation of the system object 22 includes reserving a portion 14a of RAM 14 for itself and reserving portions of RAM 14 for an active task list table 14b (function pointers to instantiated objects), an active task list name table 14c (the names of the instantiated objects), and an active task list data space 14d (pointers to the allocated memory blocks for each instantiated object).
  • the system object 22 is similar to other objects but handles global methods and functions which are common to all objects (e.g., turning exceptions on/off, returning shell status, etc.; shell status includes, e.g., the number of pending events, the number of pending acknowledgements, the number of instantiated objects, the number of communication errors, etc.) and essentially consists of a parser layer only.
  • the primary function of the system object is calling other objects to be instantiated.
  • the initialization, configuration and operation of the object oriented processor array begins when power is applied to the array as shown at 200 in Figure 4.
  • Upon power on, the system object is automatically instantiated as shown at 202 in Figure 4.
  • the timing kernel 36 scans the active task list table 14b as shown at 206 in Figure 4. Initially, however, the only operation which occurs after the system object is instantiated is the receipt of a command message to instantiate an object.
  • the input message parser 26 checks the syntax of the message and determines at 210 whether the message is for the system object. Although not shown in Figure 4, if the syntax of the message is incorrect, the input message processor 26 will prepare an error message which is queued in the output registry 34. If it is determined at 210 that the incoming message is for the system object (i.e. a command to instantiate an object), the input parser passes the command to the system object which then checks for hardware resource availability at 212 and determines whether sufficient pins are available to instantiate the object called for in the command.
  • the system object interrogates the instantiation layer of the object to determine what resources are needed to instantiate the object and then determines whether sufficient resources (e.g. pins and memory) are available. If the system object determines at 212 that (because of other object instantiations which preceded this one) there are not enough pins to instantiate the object, it buffers an error message and sends a pointer to the output registry at 214 to return an error message to the host. Control is then returned to the timing kernel which scans the active task list at 206.
  • the system object calls at 216 the instantiation layer of the object in the object library and commands the object to instantiate itself in RAM 14.
  • the instantiation layer of the object calls (at 218 in Figure 4) the memory manager 38 and requests an allocation (e.g. 14e) of RAM 14.
  • the memory manager checks for the availability of RAM at 220 and if insufficient memory is available, sends an error message at 214. If enough RAM is available, the memory manager 38 returns a pointer at 222 (to a starting address in RAM) to the instantiation layer which receives the pointer and arranges its memory at 224.
  • the memory manager also increments at 222 a heap pointer which is used by the memory manager to determine at 220 whether sufficient RAM is available for other instantiations.
  • After the instantiation layer 46 successfully completes instantiation, it informs the system object 22 that instantiation was successful and sends the pointer to the system object at 226.
  • the instantiation layer arranges (at 224) its allocated RAM into organized parts.
  • the first part is the output message header which includes a pointer to the output buffer of the object instantiation, the message length, the active task list index, the flow priority of the object instantiation, the message type, and the source ID. This first part is common to all objects, i.e. all instantiated objects arrange part of their allocated RAM in this manner.
  • the system object 22 stores a pointer in the active task list table 14b which points to the portion of the ROM where the object resides.
  • Each pointer in the active task list table is associated with an index number and the index number for the pointer is provided by the system object to the instantiation layer which stores the index number in the portion of the RAM it has configured for storage of static variables.
  • each object has a functional name (which refers to the object in ROM) and an instantiated name (which refers to the instantiation of the object).
  • the instantiated name is given as part of the high level command to the system object at the beginning of instantiation.
  • the system object 22 also stores the instantiated name of the object in the active task list name table 14c which associates the name with the same index number as the pointer to ROM, and stores the pointer to the allocated block of RAM in the active task list data space 14d which is also associated with the same index number.
  • the system object 22 then recomputes the scheduling of objects in the active task list table 14b. More particularly, each time an object instantiation is completed at 228 in Figure 4, all active tasks are stopped as shown in Figure 4 at 230 and all instantiated objects are called to their time layer. Each instantiated object returns a worst case time to the system object at 232 and the worst case time is used to calculate an offset for each active task (each instantiated object includes at least one active task). The system object 22 assigns the offset to each object instantiation which the time layer stores in private data at 234. The time between the worst case and the actual time is advantageously used by the system object for system (background) functions; i.e. system functions are not otherwise scheduled and therefore do not require overhead. After rescheduling in this manner is completed, the system object returns control to the timing kernel which resumes scanning the active task list at 206.
  • pins may be assigned to the instantiated object (if necessary) by sending command messages directly to the instantiated object.
  • functionality of a particular object may include performing certain input or output tasks which require a physical connection to an external device such as a keyboard or a display.
  • some objects may have functionality which only requires communication with other objects and/or with a script server (as described below) in which case pins do not need to be assigned to the instantiated object.
  • messages to the instantiated object are addressed to the instantiated name of the object.
  • the input message processor 26 checks the syntax of the message, buffers the message, and examines the message to determine at 210 if the message is for the system object. If the message is not for the system object, it will be addressed to a named instantiation of an object. The input message processor looks at 236 for the named instantiation in the active task list name table 14c to determine the index number of the instantiated object to which the command is directed. Although not shown in Figure 4, if the name cannot be found in the active task list name table, an error message will be prepared and queued with the output registry.
  • the input message processor also scans at 236 the active task list table 14b using the index number to find the pointer to the portion of ROM which contains the layers of the object.
  • the input message processor then forwards at 238 the message and the index number to the parser layer of the object.
  • the parser layer of the object uses the index number to determine which instantiation of the object is being addressed and to find the pointer to the appropriate portion of RAM.
  • the parser layer also interprets the message and determines at 240 whether the message is a configuration message, e.g., to assign pins or to set a flow priority. If it is determined at 240 that the message is a configuration message, the configuration data is stored at 242 in the appropriate portion of RAM.
  • If the parser layer of the object determines at 240 that the message is not a configuration message, the message is processed at 244 by the functional layer of the object and control returns to the timing kernel to scan the active task list.
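  • A minimal sketch of the routing step performed by the input message processor, assuming the illustrative name table from the earlier sketch (all names and signatures are assumptions):

```c
#include <string.h>
#include <stdint.h>

#define MAX_NAME_LEN 8    /* assumed, matches the earlier sketch */

typedef void (*parser_layer_t)(int index, const uint8_t *msg, unsigned len);

/* Find the addressed instantiation by name in the active task list name table
 * and hand the message, together with the index number, to the corresponding
 * object's parser layer. */
int route_incoming(const char names[][MAX_NAME_LEN], const parser_layer_t parsers[],
                   int n_tasks, const char *inst_name,
                   const uint8_t *msg, unsigned len)
{
    for (int i = 0; i < n_tasks; i++) {
        if (strncmp(names[i], inst_name, MAX_NAME_LEN) == 0) {
            parsers[i](i, msg, len);   /* parser layer uses the index to find its RAM */
            return 0;
        }
    }
    return -1;  /* unknown name: an error message would be queued with the output registry */
}
```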
  • the functional layer of instantiated objects also may generate messages which need to be sent to another object or script server outside the array 10. Messages from an instantiation of a cell are placed in the output message header part of its RAM and a request for registration is sent to the central output registry 34.
  • the output registry 34 maintains a queue of pointers to messages in output message header parts. The queue is scanned by one or more output flow managers. As shown in Figure 4, when an outgoing message is determined at 246 to be in the queue, the output flow manager reads at 248 the highest priority pointer in the queue.
  • the pointer points to the output message header part of RAM used by the instantiation of the object which prepared the message.
  • the output flow manager uses the data there to prepare and send messages at 250 in Figure 4, or to send a "data event” as described in more detail below with reference to the "event-reaction” messaging protocol.
  • a message flow priority can be assigned.
  • flow priority may be assigned before assigning pins.
  • a flow priority of 0-7 may be assigned where 0 represents polled priority and 1-7 indicate increasing levels of importance. When the priority level is greater than zero, output messages from the instantiated object will be generated autonomously.
  • the flow priority is stored by the instantiation of an object in its output message header part of RAM.
  • the system object instantiates itself automatically when power is applied to the object oriented processor array.
  • the operations of the system object are shown in greater detail in Figure 4a.
  • the system object seizes a pre-assigned portion of RAM for its use, shown at 1200 in Figure 4a.
  • After performing low level diagnostics of the object oriented processor array at 1202, the system object starts the timing kernel at 1204.
  • the host or the boot ROM may send global configurations to the system object, shown at 1206, such as "enable exception reporting", etc.
  • the system object then waits at 1208 for a command from the host to instantiate an object.
  • Upon receiving a command to instantiate an object, the system object examines the hardware resources (e.g. memory available) in the object oriented processor array to determine at 1210 whether there are sufficient resources available to instantiate this particular object. It will be understood that the system object need not be provided with the knowledge of the hardware requirements of each of the objects in the object library. If insufficient resources are available, the system object sends an error message (if exceptions are enabled) at 1212 and returns at 1208 to await a command to instantiate an object. It will be understood that in a fully developed application, e.g.
  • If sufficient resources are available, the system object calls the instantiation layer of the specified object at 1214.
  • the instantiation layer performs the tasks described above with reference to Figure 4 at reference numerals 218 through 226 and returns its memory pointer to the system object which is shown in Figure 4a at 1216 where the system object receives the pointer.
  • the system object then writes to the active task lists at 1218 as described above with reference to Figure 4 at reference numeral 228.
  • the system object takes control from the timing kernel at 1220 and calls the timing layers of all instantiated objects at 1222. As explained in further detail below with reference to Figure 4c, the timing layers report their worst case times at 1224 and the system object calculates an offset value for each instantiated object at 1226. These values are given to the objects as described above with reference to Figure 4 at reference numeral 234. The system object then turns control back over to the timing kernel at 1228 and returns to 1208 to await any further commands to instantiate objects.
  • the memory manager keeps track of available RAM during the instantiation processes. More specifically, as shown in Figure 4b, after the system object is self-instantiated, the memory manager, at 2200, reads the total memory amount of the object oriented processor array and sets the heap pointer to the next available portion of RAM beyond the portion already occupied by the system object. The memory manager then waits at 2202 for a request to assign RAM. When the memory manager receives a request to assign RAM from the instantiation layer of an object, it examines the request at 2204 to determine the amount of RAM requested. The memory manager finds the amount of RAM currently available at 2206 by subtracting the heap pointer from the total memory amount.
  • the memory manager decides at 2208 if the request for RAM can be fulfilled by comparing the amount requested to the amount available. If there is insufficient RAM, the memory manager sends an error message at 2210 to the instantiation layer of the object. It will be understood that the error reporting is only used when an application is being developed. If there is sufficient RAM, the memory manager assigns RAM to the requesting object at 2212 by giving it the current location of the heap pointer. The memory manager then adjusts the location of the heap pointer by adding the requested amount of RAM to the pointer at 2214, thereby moving the pointer to the start of the portion of RAM beyond that now occupied by the instantiated object. The memory manager then returns to 2202 to await another request for RAM.
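  • A hedged sketch of the memory manager described above as a simple heap-pointer (bump) allocator; the heap size and names are assumed:

```c
#include <stddef.h>

#define HEAP_SIZE 2048   /* total RAM available for instantiations (assumed) */

static unsigned char heap[HEAP_SIZE];
static size_t        heap_ptr;   /* next available byte beyond what is already occupied */

/* Requested by an object's instantiation layer during instantiation. */
void *mm_alloc(size_t n_bytes)
{
    if (n_bytes > HEAP_SIZE - heap_ptr)
        return NULL;               /* insufficient RAM: an error message is reported */
    void *block = &heap[heap_ptr]; /* current location of the heap pointer           */
    heap_ptr += n_bytes;           /* move the pointer past the newly assigned block */
    return block;
}
```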
  • the system object is not allocated any specific operation time by the timing kernel, nor does the system object allocate any time for its own use when performing scheduling tasks as described above with reference to Figure 4 at 232 and 234. More particularly, as shown in Figure 4c, the timing kernel initializes at 3200 (when started by the system object as described above with reference to Figure 4a at reference numeral 1204). If a new object has been instantiated at 3202, the system object takes control from the kernel at 3203 as described above with reference to Figure 4a at reference numeral 1220. The system object collects the worst case times from all instantiated objects at 3204 as described above with reference to Figure 4a at reference numerals 1222 and 1224.
  • the system object then totals all of the worst case times and also adds a predefined time allocated for shell operations (i.e. communications and message processing) at 3206. The times are given in terms of the system clock which will depend on the clock frequency.
  • the system object then pro-rates timer interrupts for each of the active objects and the shell at 3208.
  • the system object allocates the system clock time in the form of offsets which will be used by the timing kernel to allocate time to each active object and the shell.
  • Each active object stores its assigned offset at 3212 in a portion of its allocated RAM.
  • the system then returns control to the timing kernel at 3214.
  • the timing kernel scans the active task list at 3216.
  • the timing kernel will allow the processor to devote a number of clock cycles to the task of that object at the appropriate time which is based on the offset for that object as shown at 3218.
  • the object may not need all of the clock cycles assigned to it. For example, if the object is an input processor coupled to a keyboard or an encoder and no input activity is taking place, the object will be idle.
  • the time is given to the system object at 3220. When the system object is given time at 3220, it looks at 3202 to see if there has been a new object instantiated. If no new object has been instantiated, the system object looks at 3222 to see if there is a command to instantiate a new object.
  • If there is a command to instantiate, the system object calls the object's instantiation layer at 3224 as described above. If there is no command to instantiate at 3222, the timing kernel continues to scan the active task list at 3216.
  • the object oriented processor array according to the invention may utilize memory and instantiate objects in a slightly different manner, according to an alternate embodiment.
  • the memory 314 is arranged in a slightly different manner, i.e. there is a reserved area of memory 314e in which instantiated objects store pointers as described below.
  • the provisioning of this reserved area of memory obviates the need for an active task list name table or an active task list data space table, and only the active task list 314b is needed. However, provisioning of this reserved area can be a waste of memory which never gets used.
  • when the object oriented processor array is powered on at 300 in Figure 6, the system object automatically instantiates itself at 302 in Figure 6 in a portion 314a of RAM as shown in Figure 5.
  • the system object also reserves a portion of RAM 314b for maintaining an active task list, a list of pointers to objects in the object library.
  • the object oriented processor array is thus in a condition to receive high level commands from the host or boot ROM.
  • An exemplary configuration command from the host to the object oriented processor array takes the form {zF(ENC4)}, where z is the address of the system object, F is the command to instantiate, and ENC4 is the name of an object in the object library, i.e. the names (addresses) of the different objects are given functionally, e.g. LCDT (text LCD controller), ENC4 (4-wide encoder), KB44 (4x4 keypad controller), etc.
  • the input message processor checks the syntax of the command at 306 and passes the command to the system object.
  • the system object sends a call at 308 to the instantiation layer of the object "ENC4" and tells the object "ENC4" to instantiate itself.
  • the instantiation layer of the object "ENC4" checks at 310 its predefined area of reserved memory 314e for prior instantiations of "ENC4" and determines at 312 whether there are sufficient hardware resources available for an(other) instantiation.
  • each object in the library is provided with a pre-coded address to a small block of RAM (a portion of 314e in Figure 5) which is thus reserved for its use in keeping track of instantiations. If it is determined at 312 that insufficient resources are available, an error message is sent which is received by the output registry at 313 and forwarded to the host which receives an error message at 316.
  • the instantiation layer of "ENC4" calls the memory manager and requests at 318 an allocation of RAM sufficient for its needs.
  • the memory manager maintains a pointer to the next byte of available memory in the "heap” as well as the address of the end of the heap.
  • the memory manager subtracts "n-bytes” from the end of heap address and compares the result to the heap pointer to determine at 320 whether there is enough RAM available. If sufficient RAM is not available, an error message is sent at 322 to the output registry which passes the message to the host.
  • the memory manager assigns the pointer to the instantiation of "ENC4" and increments the heap pointer by n-bytes.
  • the instantiation layer of "ENC4" receives the pointer at 326 and writes the pointer at 328 to its block of reserved memory in 314e. As illustrated in Figure 6, the pointer points to the start of memory block 314c.
  • the object "ENC4" allocates a portion of the RAM space assigned to it for output message headers and another portion of the RAM assigned to it for "private data”.
  • the task dispatcher in the system object stores a pointer at 330 to the object "ENC4" in the active task list.
  • the position in the active task list is used as the instantiation name of the instantiation of the object. For example, if the active task list has six entries (a-f), the first instantiated object will have the instantiation name "a", the second "b", the third "c", etc. Further communications with an instantiation of an object will utilize this name.
  • pins can be assigned to it by the host using the command language according to the invention.
  • a command of the form {aP(B)} from the host to the object oriented processor array is directed to the instantiated object having the name "a" and utilizes the command P to assign pins where the parameter B is the location of the pins assigned to "a".
  • a command is issued at 332.
  • the input message processor checks the message at 334 for correct syntax and will generate an error message to the host if the syntax is incorrect. Based upon the address "a", the input processor will look at 336 for "a" in the active task list and direct the message to object "ENC4". According to this alternate embodiment, "a" would be the first pointer in the active task list, "b" would be the second, etc.
  • the pointer at "a” points to the object "ENC4" and the input processor will therefore forward the message at 338 to the object "ENC4".
  • the object "ENC4" receives the addressed message and scans its reserved memory area at 340 in Figure 6 to find the pointer to the assigned workspace of the named instantiation of itself.
  • the pin numbers are then stored at 342 by the object "ENC4" in the private data area of instantiation "a". Once the pins have been assigned to the instantiation "a" of the object "ENC4", the instantiation is operational and the pins are functioning with a default flow priority of zero.
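  • For illustration only, the two configuration commands discussed above could be emitted by a host as plain text strings; the brace framing follows the examples in the text, and actually transmitting them over the link is outside this sketch:

```c
#include <stdio.h>

int main(void)
{
    /* {zF(ENC4)} : z = system object address, F = instantiate,
     *              ENC4 = functional name of the library object     */
    printf("{zF(ENC4)}\n");

    /* {aP(B)}    : a = instantiation name (its position in the active
     *              task list), P = assign pins, B = pin location     */
    printf("{aP(B)}\n");
    return 0;
}
```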
  • the timing of all tasks is recalculated.
  • the system object stops all tasks at 344 and calls the timing layer of all instantiations of objects.
  • the instantiations respond at 346 with a worst case time to the system object and the worst case time is used to calculate the offset for the next active task.
  • Each instantiation of a object stores at 348 its offset in the private data area of its assigned RAM. The time between the worst case and the actual time used by each object instantiation is used by the system object for background tasks.
  • the system object returns at 350 to scanning the active task list which is described in more detail below with reference to Figure 7.
  • timing may be recalculated in response to a host command at 352 to assign a particular priority level to a particular object instantiation.
  • the scheduling of priorities is performed by a scheduler which may be considered a part of the system object or a separate entity.
  • the task dispatcher of the system object continually scans the active task list starting at 400 and periodically checks at 402 whether a new object instantiation has been added.
  • the scheduler sets timers for tasks based upon their priority and background tasks are completed when extra time is available. Therefore, the order of operations shown in Figure 7 is not necessarily the order in which operations will be scheduled by the scheduler. If a new object instantiation has been added, the procedure described above (344-350 in Figure 6) is performed at 404 in Figure 7 and the system object then returns to scanning active tasks. This includes monitoring the output message processor and the input message processor to determine whether messages need to be delivered. For example, at 406 it is determined whether an incoming message is pending (in the buffer of the input processor).
  • the input processor examines the active task list at 408 to determine the object for which the message is addressed and passes the message to the object.
  • the parser layer of the object examines, at 410, its preassigned reserved memory to determine which instantiation of itself should receive the message and passes the message to the appropriate layer (functional, timing, or instantiation) of the object for processing as an active task.
  • the system object then checks at 412 whether an outgoing message is in the queue of the output registry. If there are messages in the queue, the output message processor reads the highest priority pointer in the output registry at 414. The pointer in the queue points to the output message header of the object generating the message.
  • the output message headers contain pointers to output buffers in RAM as well as an indication of the type of data to be sent, and a flag to indicate whether the output message header has been queued, etc.
  • the output message former uses the output message header and the data to create a message for output onto the network, sends the message, and then drops the queue flag.
  • An example of a message format according to this alternate embodiment includes the following fields: ToArray, the processor address of the recipient; toobj, the object address of the recipient; Method, a function; and data, the data.
  • the last from fields indicate the address of the sender. If the from addressing is blank, the host is the sender. The system object then resumes scanning the active task list at 400.
  • a presently preferred embodiment of the object oriented processor array 10 is contained on a single chip having a plurality of pins, e.g., p0 through p39.
  • Three pins, e.g. p0, p1, and p2, are preferably reserved for a communications link with a host processor or script server 50 and additional object oriented processor arrays, e.g. 10a-10c, via a network link 25; and two pins, e.g. p38 and p39, are preferably reserved for connection to a DC power source 52.
  • the three pins are used to implement the communications network described in co-owned, co-pending application number 08/645,262, filed May 13, 1996, the complete disclosure of which is incorporated by reference herein.
  • only two pins will be needed to support the link and one may use the point-to-point communication methods disclosed in co-owned, co-pending application number 08/545,881, filed October 20, 1995, the complete disclosure of which is incorporated by reference herein.
  • the script server may also be coupled to other conventional microprocessors and/or peripheral devices to create a system which combines distributed object oriented processing with traditional microprocessor systems.
  • the host processor 50 is used to configure the object oriented processor arrays 10, 10a-10c utilizing a high level command language and is preferably also used to communicate with the object oriented processor arrays during normal operations.
  • the host processor acts as a central script server and all messages generated by an object oriented processor array are sent to the script server for processing. More particularly, when an instantiated object in one of the arrays 10, 10a-10c has data to send to another object, the data is first sent to the script server 50 and a program running on the script server determines the destination for the data. The program on the script server may also manipulate the data before sending it on to another object.
  • communications between object oriented processor arrays 10, 10a-10c and the script server are managed according to an "event-reaction" model.
  • the script server may be required to participate in many concurrent dialogs with the several object oriented processor arrays.
  • Each concurrent dialog requires a separate thread in the script server.
  • the event-reaction model relieves the developer from the task of writing complicated multithreaded script server code.
  • objects in an object oriented processor array which are coupled to input devices can generate data events at a rate faster than the script server can (or desires to) process them. For example, the rotation of a rotary encoder may cause a data event every time the encoder is advanced one detent. This could result in a significant amount of redundant computation and communication.
  • the event-reaction model solves this problem by allowing the script server to control when data is sent to it.
  • when an object within an array 10 has a message to send, it generates a data event.
  • the data event is registered with the script server 50 and with the output message processors of each array 10, 10a-10c which is coupled to the script server 50 via the bus 51.
  • the script server 50 reacts to the data event by allowing a variable amount of I/O exchange between the object and the script server prior to sending an acknowledgement of the data event onto the bus 25.
  • until the data event is acknowledged, no other data event may be sent to the script server 50 from any object on any of the arrays 10, 10a-10c.
  • according to another embodiment, a fixed number of data events may be pending simultaneously.
  • the number of data events which may be pending simultaneously is determined by the script server 50, and each of the arrays is configured when they are initialized by the script server so that their output message processors know how many pending events are allowed at any one time.
  • the output message processor in each array keeps a counter of the pending data events. When a data event is generated, the counter is incremented, and when a data event is acknowledged, the counter is decremented.
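A minimal sketch of that counter logic, assuming one output message processor per array; the function names and return conventions are placeholders, not part of the described system.

```c
/* Pending data event bookkeeping kept by each output message processor.
 * All names here are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

static uint8_t max_pending;     /* set by the script server at initialization      */
static uint8_t pending_events;  /* data events generated but not yet acknowledged  */

void omp_configure(uint8_t allowed_pending) {
    max_pending = allowed_pending;
    pending_events = 0;
}

/* Called when a data event is registered; returns false if the limit of
 * simultaneously pending events has been reached. */
bool omp_register_data_event(void) {
    if (pending_events >= max_pending)
        return false;           /* objects must buffer their data locally for now */
    pending_events++;
    return true;
}

/* Called when the script server acknowledges a data event with a dataAck. */
void omp_acknowledge_data_event(void) {
    if (pending_events > 0)
        pending_events--;
}
```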
  • when multiple object oriented processor arrays are coupled to a central script server by a bus, each processor array is aware of network traffic, and data events related to a particular target (script server) receiver are registered with all arrays. The number of data events which may be simultaneously pending at any time is determined by the target and is known to each of the arrays.
  • the target arbitrates the flow of messages based on the registration of data events. Arbitration may be on a FIFO basis or the targeted receiver may grant priority to later generated data events which have a higher flow priority (if the number of allowed pending data events is more than one).
  • the output message buffers of the instantiated objects provide for a message flow priority.
  • the event reaction model permits the script server code to be linear with the number of threads in the script server code depending on the number of pending data events permitted.
  • when an object has data to send to the script server, it sends a dataEvent to the script server at t1, and all other objects capable of sending messages to the script server must refrain from sending further data events.
  • when the script server is ready to receive the data from the object which sent the data event at t1, it issues a cmdEvent at t2.
  • the object addressed responds at t3 with a cmdAck which also contains some of the data it has to send.
  • the script server requests at t4 that the object send the remainder of its data.
  • the object sends the remainder of the data at t5 and the script server acknowledges that all data was received at t5+n by sending a dataAck.
  • the number of polled data packets between the data event at tl and the data acknowledge at t5+n is variable depending on the timing of the system, the other tasks being handled by the script server, and the amount of data the object needs to send.
  • the system can also include timers which cause a retransmittal of the original data event if the server does not respond within a certain time and/or which clear any pending events so that the event generator can be re-enabled. A sketch of this dialog from the object's side appears below.
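The object's side of this dialog can be pictured as a small state machine, sketched below. The state names, message codes, and printed steps are illustrative assumptions; only the t1 through t5+n sequence comes from the text.

```c
/* Object-side state machine for the dataEvent/cmdEvent/cmdAck/dataAck dialog.
 * Everything below is a sketch; names and codes are assumptions. */
#include <stdio.h>

typedef enum { OBJ_IDLE, OBJ_EVENT_SENT, OBJ_SENDING, OBJ_DONE } obj_state_t;
typedef enum { SRV_CMD_EVENT, SRV_DATA_ACK } server_msg_t;

/* Advance the object's state when a message arrives from the script server. */
obj_state_t object_step(obj_state_t state, server_msg_t msg) {
    switch (state) {
    case OBJ_EVENT_SENT:                   /* waiting since the dataEvent at t1   */
        if (msg == SRV_CMD_EVENT) {        /* t2: the script server is ready      */
            printf("t3: reply with cmdAck carrying the first part of the data\n");
            return OBJ_SENDING;
        }
        return state;
    case OBJ_SENDING:
        if (msg == SRV_CMD_EVENT) {        /* t4: server requests the remainder   */
            printf("t5: send the remainder of the data\n");
        } else if (msg == SRV_DATA_ACK) {  /* t5+n: all data acknowledged         */
            printf("dataAck received; data events are re-enabled\n");
            return OBJ_DONE;
        }
        return state;
    default:
        return state;
    }
}
```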
  • a first data generator sends a data event at t1 and a second data generator sends a data event at t2.
  • after dataEvent2, all data generators are prohibited from sending data events until a dataAck is generated.
  • the server polls the first data generator for data at t3 and the first data generator acknowledges the server at t4, and sends the data.
  • after all of the data is sent (which could require several cmdEvents and cmdAcks), complete receipt of the data is acknowledged at t5. All data generators are now free to send a data event.
  • the server begins polling the second data generator.
  • the first (or another) data generator sends another data event at t7. Again, with two pending events, all data generators are prohibited from sending additional data events.
  • the second data generator sends its data at t8, the complete receipt of which is acknowledged at t9. Now with only a single data event pending in the script server, all data generators are now free to send a data event.
  • the server may implement a time division multiplexing scheme to allow both data generators to send data "simultaneously". It will be appreciated that data generators which are prohibited from sending data are still permitted to perform local operations and that they may be provided with buffers to store locally generated data until the server permits them to send their data.

High Level Language
  • the <> indicate bytes, the () are literal characters, and the ... shows that the number of data bytes may vary.
  • the array address is five bits and for point to point communications, the array address is set to zero.
  • the message type is three bits and presently six message types are defined. These include (1) exception, (2) data acknowledgement, (3) data event, (4) command acknowledgement, (5) polled response, and (6) command event.
  • the object name is eight bits and is presently limited to ASCII characters @ through Z.
  • the method/data type field is eight bits and is limited to ASCII characters a through z.
  • This field is interpreted as a method when the message type is a command event or a command acknowledgement.
  • the data length is eight bits and does not include the left and right parentheses or the data length itself.
  • the acknowledgement of commands is mandatory and the acknowledgement must echo the address fields of the command.
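Assuming the 5-bit array address and the 3-bit message type share the first header byte, and that the fields follow in the order listed above (the text does not fix either point), a packet builder might look like this sketch; all names are illustrative.

```c
/* Sketch of a packet builder for the header described above. The bit packing,
 * field order, and names are assumptions. The caller must supply a buffer at
 * least len + 6 bytes long. */
#include <stdint.h>
#include <string.h>

enum msg_type {                 /* the six presently defined message types */
    MSG_EXCEPTION = 1, MSG_DATA_ACK, MSG_DATA_EVENT,
    MSG_CMD_ACK, MSG_POLLED_RESPONSE, MSG_CMD_EVENT
};

size_t build_packet(uint8_t *out, uint8_t array_addr, enum msg_type type,
                    char obj_name, char method, const uint8_t *data, uint8_t len) {
    size_t n = 0;
    out[n++] = (uint8_t)(((array_addr & 0x1F) << 3) | ((uint8_t)type & 0x07));
    out[n++] = (uint8_t)obj_name;   /* '@' through 'Z'                          */
    out[n++] = (uint8_t)method;     /* 'a' through 'z'                          */
    out[n++] = len;                 /* excludes the parentheses and itself      */
    out[n++] = '(';                 /* literal character                        */
    memcpy(&out[n], data, len);
    n += len;
    out[n++] = ')';                 /* literal character                        */
    return n;
}
```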
  • an exemplary application of an object oriented processor array 10 is a "smart telephone accessory".
  • the object oriented processor array 10 is provided with a speech recognition processor object 23a, a speech messaging processor object 23b, a caller ID processor object 23c, and a DTMF dialer 23d.
  • Each of these processor objects is contained in the object library 20 and is self-instantiated in response to calls from the system object 22 which receives commands from a boot ROM 51 as described above. It will also be appreciated that the boot ROM also includes commands to assign pins to the objects 23a-23d.
  • two pins assigned to the speech recognition object 23a are coupled to an external microphone 53, and two pins assigned to the speech message object 23b are coupled to an external speaker 55.
  • two pins assigned to the caller ID object 23c and the DTMF dialer object 23d are coupled to an RJ-11 plug or jack.
  • the smart telephone accessory application is designed to respond to voice commands by dialing phone numbers associated with particular recognized names.
  • this application is also designed to respond to incoming calls by announcing the name of the person calling.
  • the script server 51 is provided with scripts which are executed in response to a data event from either the speech recognition object 23a or the caller ID object 23c. An example of a script executed in response to a data event from the speech recognition object 23a is shown below in Table 3.
  • the script server looks up the recognized name in a database to find the telephone number associated with the name, in this case "HOME”. The script server then sends a command to the DTMF dialer to dial the number associated with HOME in the database. In addition, the script server sends a command to the speech message object to play three messages, e.g. "I am”, "calling", "home”.
  • the script server looks up the phone number in a database to determine the caller's name.
  • the script server constructs a message to play, e.g. "Jeff is calling", and sends the message to the speech message object.
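Rendered very loosely in C, the two reactions might look like the sketch below. The actual scripts (e.g. Table 3) are written in the high level command language, and db_lookup_number, db_lookup_name, send_command, and the object names used here are hypothetical placeholders.

```c
/* Illustrative only: hypothetical helpers standing in for the script server's
 * database lookups and its command messages to instantiated objects. */
extern const char *db_lookup_number(const char *name);    /* placeholder */
extern const char *db_lookup_name(const char *number);    /* placeholder */
extern void send_command(const char *object, const char *method, const char *arg);

/* Reaction to a data event from the speech recognition object. */
void on_speech_recognition_event(const char *recognized_name) {   /* e.g. "HOME" */
    const char *number = db_lookup_number(recognized_name);
    send_command("DIALER", "dial", number);        /* DTMF dialer object          */
    send_command("SPEECH", "play", "I am");        /* speech message object       */
    send_command("SPEECH", "play", "calling");
    send_command("SPEECH", "play", recognized_name);
}

/* Reaction to a data event from the caller ID object. */
void on_caller_id_event(const char *phone_number) {
    const char *name = db_lookup_name(phone_number);   /* e.g. "Jeff"             */
    send_command("SPEECH", "play", name);
    send_command("SPEECH", "play", "is calling");
}
```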
  • one or more object oriented processor arrays are provided with local, internal script servers and no central script server or host processor is needed.
  • communications are controlled according to the event-reaction protocol by providing the output message processor with additional functionality to register and queue data events and an event script look up table to determine which events relate to internal messages.
  • the input message processor is provided with additional functionality to queue and buffer messages destined for objects in the array.
  • an object oriented processor array 110 has many components which are similar to those of the array 10 described above and the similar components are labelled with similar reference numerals incremented by 100.
  • the array 110 includes an object library 120, a system object 122, and, after initialization, a number of active objects 123a, 123b, etc.
  • the array also has a communications receiver 124a and a communications transmitter 124b, both of which are coupled to an external bus or link 125 for coupling the array 110 to other similar arrays.
  • Messages bound for objects within the array are routed by an input router 126c which receives the messages from a buffer 126b.
  • the buffer 126b buffers messages received from the global input parser 126a and the internal message interface queue 130.
  • Messages which originate external of the object oriented processor array are received by the communications receiver 124a and passed to the global parser 126a. Messages which originate within the array are received by the internal message interface queue 130 which receives the messages from the data event processor 135.
  • the data event processor 135 also provides input to the output queue 132 for transmission to objects external of the array by the transmitter 124b.
  • the data event processor 135 receives input from the pending output event queue 133 and a programmable table of event scripts 137. As in the first embodiment, messages from objects in the array are registered in an output registry 134.
  • the source address of a data event is read by the data event processor 135 from the pending output event queue 133 and used to look up a script associated with that address in the script table 137.
  • the script is executed by the data event processor 135 and typically results in the generation of a message.
  • the message generated by the script may be destined to an object within the array or an object external of the array. In the former situation, the message is passed to the internal message interface queue. In the latter situation, the message is sent to the output queue 132.
  • the data event processor and the script table function as a local script server within the array and there is no need for a central script server. It should be noted, however, that a central script server may be used with the array 110. Messages will be sent to the central script server when no script for the data event appears in the script table, or when the script associated with the data event causes a message to be sent to the central script server.
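The lookup-then-fallback behavior of the data event processor might be organized as in the sketch below; the table layout, its size, and the helper names are assumptions. The placeholders pending_event_source() and send_to_central_script_server() stand in for reads from the pending output event queue 133 and the fallback path through the output queue 132.

```c
/* Sketch of the data event processor's routing logic. Illustrative only. */
typedef struct {
    unsigned src_address;            /* source address of a data event            */
    void (*script)(void);            /* script to execute for that source         */
} event_script_t;

#define SCRIPT_TABLE_SIZE 16
static event_script_t script_table[SCRIPT_TABLE_SIZE];   /* programmable table 137 */

extern unsigned pending_event_source(void);               /* placeholder            */
extern void send_to_central_script_server(unsigned src);  /* placeholder fallback   */

void data_event_processor_step(void) {
    unsigned src = pending_event_source();     /* read from pending event queue 133 */
    for (int i = 0; i < SCRIPT_TABLE_SIZE; i++) {
        if (script_table[i].script && script_table[i].src_address == src) {
            script_table[i].script();          /* the script typically generates a
                                                  message for the internal queue or
                                                  the output queue */
            return;
        }
    }
    send_to_central_script_server(src);        /* no local script for this data event */
}
```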
  • This embodiment of the object oriented processor array may be embodied on a single chip using one or more microprocessors with associated RAM and ROM.
  • the array 110 will also include a memory manager 138 and a timing kernel 136.
  • the script table may be stored in either RAM or ROM and may be initialized (programmed) at startup by either a boot ROM or by a host processor.
  • an object oriented processor array 510 is functionally similar to the object oriented processor array 10 shown in Figure 2.
  • the microprocessor 521 is used to handle all I/O functions with respect to the array 510.
  • the microprocessor 521 therefore provides the functionality of the communications interface 524, the input message processor 526, the internal output flow manager 530, the external output flow manager 532, and the central output registry 534.
  • the microprocessor 522 provides the functionality of the system object and the microprocessors 523a-523c, etc., provide separate processors for each instantiation of an object from the object library 520.
  • the library contains software for each object and an object is instantiated by loading software into one of the processors 523a, etc.
  • although the invention has been described primarily with reference to an embodiment utilizing a microprocessor, where the objects, including the system object, are implemented in software, the invention can, as previously mentioned, be implemented in numerous ways, including, but not limited to, programmed microprocessors, field programmable gate arrays, custom LSIs, etc.
  • the generic concept of the invention which allows for numerous implementations is seen schematically in Figure 12, where the object oriented processor array 10 has pins 1000a, a bit and format processing ring 1000b, an elastic memory ring 1000c, and a math and logic processing core 1000d.
  • the pins 1000a are the actual physical link of the object oriented processor array 10 to the real world surrounding it.
  • the bit and format processing ring 1000b, the elastic memory 1000c, and the math and logic processing 1000d implement the objects, with most objects utilizing bit and format processing, memory, and math and logic processing, and with some objects utilizing only bit and format processing with memory, or math and logic processing with memory.
  • the bit and format processing ring 1000b performs the functions of reducing the highly disparate ensemble of real world interfaces to a regular structure, where the "regular structure" is determined by whatever best fits the needs of the math and logic processing core 1000d, and of providing high speed, low level control of external devices.
  • Typical functions performed in the bit and format processing ring 1000b include serial I/O, parallel I/O, specific low level control of analog to digital converters, and other processing of signals with high analog content.
  • the implementation of the bit and format processing layer is highly variable. For example, standard microcontroller ports can be mapped into the function, although significant computing resources would be expended.
  • the bit processing portion of the bit and format processing ring can conceptually be viewed as a system which treats the input pins as a vector which is sampled at a high frequency. The act of sampling permits the bit processing to be thought of as a procedural algorithm which contains actions that enable changes of state to be detected very efficiently.
  • the format processing portion of the bit and format processing can be conceptually viewed as a procedural algorithm for reformatting data for delivery to the relevant memory construct 1000c.
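A minimal sketch of the sampling idea, assuming a hypothetical read_pin_vector() accessor that returns one bit per physical pin; the XOR against the previous sample is what makes changes of state cheap to detect.

```c
/* Bit-processing sketch: sample the pin vector at a fixed rate and flag the
 * pins whose state changed since the last sample. read_pin_vector() is a
 * hypothetical hardware accessor, not part of the described design. */
#include <stdint.h>

extern uint32_t read_pin_vector(void);   /* placeholder: one bit per pin */

static uint32_t previous_sample;

/* Called at the sampling frequency; returns a mask of pins that changed state. */
uint32_t sample_pins(void) {
    uint32_t now     = read_pin_vector();
    uint32_t changed = now ^ previous_sample;   /* differing bits = state changes */
    previous_sample  = now;
    return changed;   /* format processing then reshapes data for the memory ring */
}
```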
  • the elastic memory 1000c can be implemented in a broad range of manners.
  • the implementation can include one or more of buffers, FIFOs, LIFOs or stacks, circular buffers, mail boxes, RAM, ROM, etc.
  • the elastic memory ring 1000c can include logic for the computation of pointer locations, etc. Alternatively, this logic can be incorporated into the functionality of the bit and format processing ring 1000b on one side, and the math and logic processing core 1000d on the other side. Where code is stored for instantiation of objects, the elastic memory 1000c includes the code.
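As one of the many possible realizations listed above, the sketch below shows a small circular buffer with the pointer computation kept inside the memory construct; the size and names are assumptions. Keeping the counters free-running and masking only on access is a common way to distinguish full from empty without an extra flag.

```c
/* Circular-buffer sketch for the elastic memory ring. The producer side would
 * be the bit and format processing ring, the consumer the math and logic core. */
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 64u   /* assumed size; a power of two keeps the index math cheap */

typedef struct {
    uint8_t  data[RING_SIZE];
    unsigned head;      /* total bytes written */
    unsigned tail;      /* total bytes read    */
} ring_t;

bool ring_put(ring_t *r, uint8_t byte) {
    if (r->head - r->tail >= RING_SIZE)
        return false;                              /* full: caller retries later */
    r->data[r->head++ & (RING_SIZE - 1)] = byte;
    return true;
}

bool ring_get(ring_t *r, uint8_t *byte) {
    if (r->head == r->tail)
        return false;                              /* empty */
    *byte = r->data[r->tail++ & (RING_SIZE - 1)];
    return true;
}
```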
  • the math and logical processor core 1000d can be one or more standard processor cores (e.g. Intel® x86, Motorola® 68xxx, Intel® 8051, PowerPC™, or Analog Devices 21xx DSP) which process and transform I/O data into whatever the external device or application requires.
  • the math and logic processing portion includes the functional layers of the various processor objects.
  • the design of the math and logical processor core 1000d is highly application dependent.
  • instantiated objects of the object oriented processor array will utilize one or more pins 1000a, the bit and format processing ring 1000b, the elastic memory 1000c, and the math and logic processing core 1000d.
  • the speech recognition and speech message objects described above with reference to Figure 9 will typically utilize each of the rings and two pins each.
  • certain objects which are internal objects, e.g. an internal database object, do not require any pins 1000a or bit and format processing 1000b, and utilize only the elastic memory 1000c and the math and logic processing core 1000d.
  • other objects such as the encoder object which accepts input from a rotary encoder, buffers the input, and sends the input to the script server, do not require math and logic processing. These objects require only the pins 1000a, the bit and format processing ring 1000b, and the elastic memory 1000c.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Stored Programmes (AREA)

Abstract

An object oriented processor array includes a library (20) of functional objects which are instantiated by commands through a system object and which communicate via a high level language. The object oriented processor array may be embodied in hardware, software, or a combination of hardware and software. Each functional object may include a discrete hardware processor or may be embodied as a virtual processor within the operations of a single processor. In one embodiment, the object oriented processor array is formed on a single chip or on a single processor chip and an associated memory chip, and pins may be assigned to each object via a high level command language. Methods and apparatus for allocating memory (38) to instantiated objects are disclosed, with instantiated objects communicating directly with a script server which is programmed to react to data events generated by instantiated objects.

Description

OBJECT ORIENTED PROCESSOR ARRAYS
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to object oriented processors and processor systems. More particularly, the invention relates to an object oriented processor or processor system which utilizes a library of selectable processor objects in order to implement an array of processor objects. Although not limited thereto, the processors or processor system is preferably arranged such that the processor objects are self-instantiated in virtually any combination, and the processors or processor system preferably utilizes an event-reaction communication protocol through which processor objects communicate, and which is controlled by a high level scripting language.
2. State of the Art
Modern computers permit seemingly simultaneous execution of many operations by interrupting the microprocessor periodically to execute several software threads in turn. For example, as a user types on a keyboard, the input from this peripheral to the microprocessor is seemingly simultaneously displayed by the microprocessor on a video display peripheral. In reality, the microprocessor is interrupted periodically .from displaying output on the video display in order to obtain input from the keyboard. It is only, because the microprocessor operates at a very high speed that there is an illusion of simultaneity. In a more complex processing system, there may be many threads vying for microprocessor attention at any time. For example, in a desktop multimedia computer, several peripheral devices must be controlled by the microprocessor in a seemingly simultaneous manner in order to produce the proper results and different operations such as displaying video and playing audio must be handled by separate threads. The programming environment in a system having so many threads is incredibly complex. The system software must be written to schedule microprocessor attention to each thread, assign priority to each thread and allow peripherals to interrupt the microprocessor at appropriate times. The system software must then schedule tasks for the microprocessor in response to the interrupts from various peripherals.
In addition to scheduling problems, software in a multi-tasking (multi-threaded) system is difficult to debug. Single stepping techniques cannot be used, since different threads may depend on the results of other threads, and only a single thread can be operational during single stepping. The handling of interrupts by the microprocessor is determined in part by the bus protocol and in part by the design of the microprocessor itself. Typically, the bus is designed to work with a particular microprocessor or group of microprocessors; and peripheral devices are designed to work with a particular bus. Moreover, each microprocessor-bus system handles interrupts in a different way. This makes it difficult, if not impossible, to adapt program code used on one microprocessor-bus system for use on another.
In order to relieve the host processor from performing every task, multiprocessor systems have been proposed. Some multiprocessor systems are successful in dividing tasks among processors when the tasks are well defined. For example, it is not uncommon to divide tasks between a data processor and a signal processor in systems which deal with signals and data in real time. It is more difficult to divide data processing tasks among several data processors. The operating system must decide which tasks will be performed by which processor and must schedule tasks so that processors do not remain idle while waiting for new tasks or while waiting for other processors to complete tasks so as to provide needed results. Consequently, there has been very little success in developing a general purpose multiprocessor system and there is no standard programming language for programming a multiprocessor system.
U.S. Patent Number 5,095,522 to Fujita et al. discloses an object-oriented parallel processing system which utilizes "concept objects" and "instance objects". The system utilizes a host processor and a plurality of general purpose processors which are programmed by the host processor. The host user must program (generate concept and instance objects) for each processor before parallel processing can begin. Fujita et al. considers this aspect of their system to be a feature which allows dynamic changes in the functionality of each of the processors. However, this aspect of their system greatly complicates the host processor software.
Similarly, U.S. Patent Number 5,165,018 to Simor describes a system in which "nodes" are provided with generic configuration rules and are configured at runtime via resource definition messages from the control node. Simor considers this aspect of his system to be an advantage which, among other things, "isolates the hardware from the software" and "allows programs to be written as if they were going to be executed on a single processor." In addition, Simor's system permits programs to be "distributed across multiple processors without having been explicitly designed for that purpose."
Both Fujita et al. and Simor utilize general purpose processors and attempt to isolate the hardware from the software, freeing the programmer to write code as if it were being executed on a single processor. However, as mentioned above, writing multithreaded code for a single microprocessor is a daunting task. Neither Fujita et al. nor Simor propose any solution to this problem.
3. Related Inventions
Related application Serial Number 08/525,948 approaches the problem of distributed processing in a manner which is completely different from that of either Fujita et al. or Simor. The system disclosed in the '948 application utilizes processors which have been pre-programmed with functionality for a specific purpose and thereby integrates hardware with software. The developer chooses specific hardware (object oriented processors) in a manner similar to choosing specific software objects. This approach requires that the developer be very aware of the hardware used in the system, but frees the developer from writing much of the code used to implement the system. Accordingly, the developer need only write a minimal amount of relatively high level code to link the pre-programmed object oriented processors which contain statically stored code for performing specific tasks. This approach is based on the belief that writing and de-bugging code is more time consuming and more expensive than linking together processors which contain pre-written, bug- free code. This approach enables rapid system development, relieves the host processor of many scheduling tasks, simplifies de-bugging, enables cross-platform support, allows software emulation of hardware devices, as well as providing other advantages.
According to the '948 application, object oriented processors communicate with each other and/or with the host processor via the exchange of high level messages. This earliest implementation of the communication protocol required that the host poll at least some of the object oriented processors (i.e. those responsible for processing input data) to determine the availability of data. This was eventually found to detract from the goal of simple coding as the host code had to be written in a manner that would scan all possible input sources on a frequent basis. It was eventually decided that this polling created an undesirable overhead and coding complication. Since many of the originally developed object oriented processors operated in real time, the polling scan rate could be high and thus the overhead could be substantial. In addition, the early communication protocol did not provide information about the source of the data. This was instead derived by specific information requests by the host. Thus, several message exchanges might have been required before both data and source were determined.
Related application Serial Number 08/683,625 discloses a distributed processing system in which one or more object oriented processors are embodied as a collection of components on a single ASIC chip. This related application includes an enhanced communication language where the host need not poll the object oriented processors and where messages from one processor to another include source and destination addresses. This communication protocol can be said to be "event driven". During subsequent development of this event driven communication architecture, it was determined that programming the event driven model was somewhat complex and that communication bandwidth was not readily conserved.
In both of the related applications, each object oriented processor has a functionality which defines its physical connectability. More specifically, as embodied on a single chip, each object oriented processor (or collection of object oriented processors) presents a number of pins for coupling the processor to other devices. According to previously disclosed embodiments of the object oriented processors, the functionality of each pin is substantially fixed at the time the object oriented processor is manufactured. For example, as disclosed in related application Serial Number 08/525,948, a user interface controller utilizes thirty-seven pins, most of which have a set functionality. Several of the pins have alternate functionality. For example, pins A0 through A7 are an aux port. However, pins A1 and A2 can be used as LCD enable pins and pins A3-A7 can be used as LED enable pins. Nevertheless, for the most part, the functional resources of the object oriented processors are pre-defined with respect to certain pins and cannot be substantially changed by the developer/user.
SUMMARY OF THE INVENTION
As used herein, the term "object oriented processor array" means a collection of object oriented processors where each object oriented processor incorporates a separate hardware processor, or a collection of object oriented processors where each object oriented processor is embodied as a virtual processor sharing the same hardware processor, or any combination of discrete hardware processors and virtual processors.
It is therefore an object of the invention to provide an object oriented processor array with enhanced post-manufacture configurability.
It is also an object of the invention to provide an object oriented processor array on a chip having a number of pins where the functionality of the pins is configurable by a developer/user.
It is another object of the invention to provide an object oriented processor array which contains a library of functionality which may be selected in virtually any combination. It is a further object of the invention to provide an object oriented processor array which contains a library of functionality which may be assigned to pins via software commands.
It is an additional object of the invention to provide an object oriented processor array which utilizes an enhanced communication protocol which conserves bandwidth and allows the developer/user to choose a level of coding simplicity in exchange for object latency.
Another object of the invention is to provide an object oriented processor array which utilizes memory in an efficient manner.
Overview
In accord with these objects which will be discussed in detail below, an object oriented processor array of the present invention includes a readable memory containing a library of configurable (programmable) functions (also referred to as objects) and a writable memory in which objects are instantiated and configured. More specifically, the object oriented processor array includes a system functionality (system object) which is automatically instantiated in writable memory at power-up, which calls other objects to be instantiated in writable memory in response to commands from a host processor or a boot ROM, and which maintains an active task list and other information about instantiated objects. The object oriented processor array according to the invention further includes a communications interface, an input message processor, and an output message processor. The communications interface allows the object oriented processor array to communicate with other object oriented processor arrays and/or with a host processor or script server. The output message processor preferably includes an output flow manager for handling messages from processor objects in the array and a central output registry for queuing messages. According to a presently preferred embodiment, the object oriented processor array is embodied as a virtual machine which is formed from software which runs on a microprocessor. Therefore, the software which embodies the object oriented processor array is provided with a timing kernel which simulates parallelism and a memory manager which allocates memory to objects when they are instantiated.
Self-Instantiation
According to a presently preferred embodiment of the invention, the library of functions is configured as a library of objects stored in ROM. Each object includes a parser layer, a functional layer (which preferably includes a runtime layer and a background layer), a time layer, and an instantiation layer. The system object is also stored in ROM and is automatically instantiated in RAM when the processor array is powered on, and in a preferred embodiment of the invention, reserves RAM for an active task list table (function pointers to instantiated objects), an active task list name table (the names of the instantiated objects), and an active task list data space (pointers to the allocated memory blocks for each instantiated object). The system object is similar to the other objects but handles global methods and functions which are common to all objects and essentially consists of a parser layer only. The primary function of the system object is to call on objects to instantiate themselves.
In response to a high level command from a host processor (or a boot ROM), the system object calls the instantiation layer of an object in the object library and commands the object to instantiate itself in RAM. The instantiation layer of the object calls the memory manager and requests an allocation of RAM. The memory manager returns a pointer (to a starting address in RAM) to the object. According to one embodiment, the object returns the pointer to the system object after performing any necessary initializations to complete instantiation. After the object informs the system object that instantiation was successful, the system object stores a pointer in the active task list table to the portion of the ROM in which the object resides. Each pointer in the active task list is associated with an index number. The system object also stores the name of the instance of the object in the active task list name table which associates the name with the same index number, and stores the pointer to the allocated memory block in the active task list data space which is also associated with the same index number. The system object then recomputes the scheduling of objects in the active task list. According to another embodiment, each instantiated object stores the pointer to its allocated RAM in a reserved area of RAM.
The instantiated object arranges its allocated RAM in several parts. The first part is the output message header which includes a pointer to the output buffer of the object instantiation, the message length, the active task list index, the flow priority of the object instantiation, the message type, and the source ID. This first part is common in all objects, i.e. every instantiated object has this part in its allocated RAM. The second part is private data used by the cell in the performance of its functionality. This second part may be different for each instantiated object and some objects may arrange additional parts of allocated RAM. Once an object has been thus instantiated, physical pins can be assigned to the instantiation of an object by sending a high level command from a host processor (or boot ROM) to the instantiated object.
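Sketched in C, the common first part of an instantiation's allocated RAM might look like the structure below; the types and field names are assumptions drawn from the list of contents given above.

```c
/* Sketch of the output message header that begins every instantiation's
 * allocated RAM. Field names and types are illustrative assumptions. */
#include <stdint.h>

typedef struct {
    uint8_t *out_buffer;     /* pointer to the instantiation's output buffer   */
    uint16_t msg_length;     /* length of the pending message                  */
    uint8_t  task_index;     /* index into the active task list                */
    uint8_t  flow_priority;  /* 0 = polled, 1-7 = increasing importance        */
    uint8_t  msg_type;       /* e.g. data event, command acknowledgement       */
    uint8_t  source_id;      /* identifies the source of the message           */
    /* private data used by the instantiation follows this common header */
} output_msg_header_t;
```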
The input message processor checks the syntax of all incoming messages, buffers the message, examines the command and looks at the active task list name table to determine the index number for the instantiated object to which the command is directed, and passes the message and the index number to the parser layer of the object. The object parser layer interprets the pin assignment message and stores the pin assignment in its private data area of RAM for the named instantiation of the object.
According to other preferred aspects of the invention, objects may be instantiated several times so long as enough hardware resources (pins and RAM) are available to support another instantiation. The system object keeps track of all object instantiations by placing entries in the active task list table, the active task list name table, and the active task list data space table. In addition, the memory manager maintains a pointer to the memory heap which is utilized and generates an error message if requested to assign more RAM than is available. After pins have been assigned to an instantiation of an object, a message flow priority can be assigned. According to a presently preferred embodiment, a flow priority of 0-7 may be assigned where 0 represents polled priority and 1-7 indicate increasing levels of importance. When the priority level is greater than zero, output messages from the object will be generated autonomously. The flow priority is stored by the instantiation of an object in its output message header part of RAM. During initialization, the system object initiates a global variable or counter indicating the number of active tasks (=0). Each time an object is instantiated, this variable is incremented. When the variable is >0, the system object returns control to the timing kernel which scans the active task list. Each time an object is instantiated, all active tasks are stopped and all instantiated objects are called to their timing layer and the tasks are scheduled. The system object assigns an offset to each object instantiation which the timing layer stores in private data. The object returns a worst case time to the system object and the worst case time is used to calculate the offset for the next active task. The time between the worst case and the actual time is advantageously used by the system object for system (background) functions; i.e. system functions are not otherwise scheduled and therefore do not require overhead.
Basic Message Handling
During operation, a message to a particular instantiation of an object is parsed by the input message processor which scans the active task list name table to find the index into the active task list and thus the pointer to the object instantiation addressed. The object receives the message from the input message processor together with the index. The object uses the index and the active task list data space table to find which instantiation of itself is being addressed. Messages from an instantiation of an object are placed in the output message header part of its RAM and a request for registration is sent to the central output registry. The output registry maintains a queue of pointers to messages in output message header parts. The queue is scanned by one or more output flow managers which form output messages from the information held in the output message header parts.
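A minimal sketch of the central output registry and the flow manager's priority scan, using a pared-down header that carries only the flow priority; the queue size and all names are assumptions.

```c
/* Central output registry sketch: a queue of pointers to output message
 * headers, scanned for the highest flow priority. Illustrative only. */
#define REGISTRY_SLOTS 16

struct out_hdr {                  /* only the field the scan below needs      */
    unsigned char flow_priority;  /* 0 = polled, 1-7 = increasing importance  */
    /* remaining output message header fields would follow */
};

static struct out_hdr *registry[REGISTRY_SLOTS];
static int registry_count;

/* An instantiated object requests registration of its output message header. */
int registry_enqueue(struct out_hdr *hdr) {
    if (registry_count >= REGISTRY_SLOTS)
        return -1;                                    /* queue full          */
    registry[registry_count++] = hdr;
    return 0;
}

/* An output flow manager takes the highest-priority pending entry, if any. */
struct out_hdr *registry_take_highest(void) {
    int best = -1;
    for (int i = 0; i < registry_count; i++)
        if (best < 0 || registry[i]->flow_priority > registry[best]->flow_priority)
            best = i;
    if (best < 0)
        return 0;                                     /* nothing queued      */
    struct out_hdr *hdr = registry[best];
    registry[best] = registry[--registry_count];      /* compact the queue   */
    return hdr;
}
```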
Central Script Server
According to one embodiment of the invention, one or more object oriented processor arrays are coupled to a central script server or host processor. Messages which result from data events in any of the processor objects are sent to the central script server for processing. The central script server parses the messages it receives from processor objects and executes a script which has been written for this type of data event. The script usually results in the sending of a message to another processor object on the same array or on a different array than the array containing the processor object having the data event. According to one aspect of the invention the flow of messages is based on an event-reaction architecture.
Event-Reaction Model
The event-reaction architecture for message flow is a flexible method of assigning priority to multiple bus users which conserves bandwidth and simplifies host processor programming. According to the event-reaction model, when a processor object has a message to send, it generates a data event which is registered with the target recipient of the message (usually the script server). The target reacts to the event by allowing a variable amount of I/O exchange between the processor object and the target prior to an acknowledgement of the data event. According to one embodiment, until the data event is acknowledged, no other data event may be sent to the target. According to another embodiment, a fixed number of data events may be pending simultaneously. In one embodiment of a multi-node system, each node (each object oriented processor array) is aware of network traffic and data events related to a particular target (script server) receiver are registered with all nodes which have that receiver as a target as well as with the target. The number of data events which may be simultaneously pending at any time is determined by the target and known to each of the nodes. The target arbitrates the flow of messages based on the registration of data events. Arbitration may be on a FIFO basis or the targeted receiver may grant priority to later generated data events which have a higher flow priority. As mentioned above, the output message buffers of the instantiated objects provide for a message flow priority. The event-reaction model permits the script server code to be linear with the number of threads in the script server code depending on the number of pending data events permitted. According to another embodiment, a central hub (script server) arbitrates the exchange of messages among a number of object oriented processor arrays coupled to the hub by individual links rather than a common bus. The hub communicates with each array according to the message flow priorities of the cells in the array. The number of threads in the code on the hub depends on the sum of the number of pending data events permitted in each object oriented processor array.
Distributed Script Servers
According to another embodiment, one or more object oriented processor arrays are provided with local, internal script servers. In this "distributed scripting" embodiment, communications are controlled according to the event-reaction protocol by providing the output message processor with additional functionality to register and queue data events and an event script look up table to determine which events relate to internal messages. The input message processor is provided with additional functionality to queue and buffer messages destined for objects in the array.
High Level Language
According to other preferred aspects of the invention, a high level language is provided for communication between object oriented processors and a host processor during set-up and during operation. The high level language according to the invention includes support for the event- reaction protocol, an efficient addressing scheme, better use of bandwidth and simplified host parsing. According to a presently preferred embodiment, the high level language messages are exchanged in packets of variable length with a well defined header. The header includes addressing information, an indication of the type of message which follows, an indication of the method or data type, and an indication of the length of the remaining data in the packet. The high level language is self-aligning, simple, and robust.
As mentioned above, the object oriented processor arrays according to the invention are embodied in many alternative constructions including running software on a microprocessor, field programmable gate arrays, multiple microprocessors, and other hardware devices.
Additional objects and advantages of the invention will become apparent to those skilled in the art upon reference to the detailed description taken in conjunction with the provided figures.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic block diagram of the major hardware components of an object oriented processor array according to the invention;
Figure 2 is a schematic block diagram of the major functional components of an object oriented processor array according to the invention;
Figure 3 is a schematic memory map of the writable memory area in the object oriented processor array of Figures 1 and 2;
Figure 4 is a flow chart illustrating the basic steps in the initialization, setup, and operation of the object oriented processor array of Figures 1 and 2;
Figure 4a is a flow chart illustrating the basic functions and operation of the system object;
Figure 4b is a flow chart illustrating the basic functions and operation of the memory manager;
Figure 4c is a flow chart illustrating the basic functions and operation of the timing kernel, active objects, and system object with regard to scheduling;
Figure 5 is a schematic memory map of the writable memory area in an alternate embodiment of an object oriented processor array;
Figure 6 is a schematic flow chart illustrating the steps in the setup programming of the alternate embodiment;
Figure 7 is a schematic flow chart illustrating the operational mode of the alternate embodiment;
Figure 8 is a schematic block diagram of an object oriented processor array according to the invention coupled to a host processor and a power supply;
Figure 9 is a schematic block diagram of an implementation of an object oriented processor array to control a "smart telephone";
Figure 10 is a schematic block diagram of the major functional components of an object oriented processor array according to a second embodiment of the invention;
Figure 11 is a schematic block diagram of an object oriented processor array according to the invention utilizing multiple microprocessors; and
Figure 12 is a schematic diagram generally illustrating the implementation of the object oriented processor array onto any hardware device or devices.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A Basic Hardware-Software Example
Referring now to Figure 1, an object oriented processor array 10 according to a presently preferred embodiment of the invention includes a readable memory 12, a writable memory 14, one or more programmable processors 16 coupled to the memory 12 and 14, and a real world interface such as a number of pins 18 which are coupled to the processor(s) 16. As shown and described in further detail below, one embodiment of the invention resides on a single chip which includes a single general purpose microprocessor, RAM and ROM and which has a number of pins. Those skilled in the art will appreciate, however, that the functional aspects of the invention may be embodied using many different types of hardware and/or software.
Turning now to Figure 2, and with reference to Figure 1, the readable memory 12 contains a library 20 of configurable (programmable) functions (also referred to as objects) which are instantiated and configured in the writable memory 14 as described in detail below. More specifically, the object oriented processor array 10 includes a system object 22 which is automatically instantiated in writable memory at power-up, which calls other objects from the library 20 to be instantiated in writable memory 14 in response to commands from a host processor or a boot ROM as described in more detail below. Once an object has been instantiated, it appears as an active object, e.g. 23a, 23b, 23c, in the processor array 10. The object oriented processor array 10 further includes a communications interface 24, an input message processor 26, and an output message processor 28. The communications interface 24 allows the array of active objects 23a-23c to communicate with other object oriented processor arrays and/or with a host processor or script server via a communications link or bus 25 (which may be in the form of a physical bus, a multi-ported memory or even a radio link). The communications interface also allows the system object 22 to receive commands from a host or boot ROM. The input message processor 26 is responsible for routing and basic syntax parsing of incoming messages. Once the message is received and deemed syntactically correct, it is routed to the parser layer of the addressed object as discussed below. The output message processor 28 preferably includes an output flow manager 32 for handling messages from active objects in the array 10 to processors external of the array 10, and a central output registry 34 for queuing messages. All input to the output message processor 28 is through the central output registry 34. As described in more detail below, upon the occurrence of an event within an object, the object calls the central registry 34 and provides a handle to a standard structure which is entered into the output queue. The output queue is scanned by the flow managers which look for information on the output queue and the priority the object has been assigned, if any. Once a flow manager determines which object has subsequent use of the port, it constructs a message using information in the standard structure which determines the message type (e.g. data event, command ack, etc.), the name of the object that originated the message, the type or source of the data, and any data associated with the message which is derived by referencing a pointer. The newly composed message is then sent to the output port and transmitted.
As mentioned above, according to one embodiment, the object oriented processor array 10 is embodied as a virtual machine which is formed from software which runs on a microprocessor. Therefore, the software which embodies the object oriented processor array 10 is provided with a timing kernel 36 which simulates parallelism and a memory manager 38 which allocates memory to objects when they are instantiated. It will also be appreciated that when the object oriented processor array 10 is embodied as a virtual machine, the interconnections among the elements shown in Figure 2 are not physical connections, but rather indicative of the functional relationships among the elements.
Basic Features of Processor Objects
Referring now to Figures 2 and 3, each object, e.g. 23c, includes a parser layer 40, a functional layer 42, a time layer 44, and an instantiation layer 46. The parser layer contains the intelligence to interpret the vocabulary pertinent to the particular object and to perform tasks which can be performed immediately. As mentioned above, each object has a certain predefined functionality which is configurable. The vocabulary for each object, therefore, will be governed somewhat by the functionality which the object contains and will also include some general purpose vocabulary for communication. Examples of how the vocabulary of each object may be different is shown in related application Serial Number 08/525,948. The parser layer is the means by which an instantiated object is initialized and configured, and the means by which the object receives messages. The functional layer 42 contains all of the intelligence needed to carry out the predefined functionality of the object and is preferably divided into a runtime layer and a background layer. The runtime layer contains the functionality which needs to be executed on a continual basis and the background layer contains the functionality for relatively low priority tasks. For example, if the object is coupled to an input device, scanning the input device would be part of the runtime layer. The functions performed in the background layer usually take a long time to execute such as low speed communications dialog with a device socket. The time layer 44 participates in the scheduling of the runtime layer by providing information to the system object 22 about the dynamic performance and behavior of the particular object as described more fully below. The instantiation layer 46 performs the tasks needed to instantiate the object when called by the system object as described more fully below.
As mentioned above, all of the objects, e.g. 23a-23c, are preferably stored in ROM. The system object 22 is also preferably stored in ROM and is automatically instantiated when the processor array 10 is powered on. The instantiation of the system object 22 includes reserving a portion 14a of RAM 14 for itself and reserving portions of RAM 14 for an active task list table 14b (function pointers to instantiated objects), an active task list name table 14c (the names of the instantiated objects), and an active task list data space 14d (pointers to the allocated memory blocks for each instantiated object). The system object 22 is similar to other objects but handles global methods and functions which are common to all objects (e.g., turning exception on/off, returning shell status, etc. - shell status includes, e.g., the number of pending events, the number of pending acknowledgements, the number of instantiated objects, the number of communication errors, etc.) and essentially consists of a parser layer only. The primary function of the system object is calling other objects to be instantiated.
Initialization, Configuration, and Operation
Turning now to Figure 4, and with reference to Figures 2 and 3, the initialization, configuration and operation of the object oriented processor array begins when power is applied to the array as shown at 200 in Figure 4. Upon power on, the system object is automatically instantiated as shown at 202 in Figure 4. The system object initiates a global variable or counter indicating that the number of active tasks =0. Each time an object is instantiated, this variable is incremented. When the variable is >0 as shown at the decision point 204 in Figure 4, the timing kernel 36 scans the active task list table 14b as shown at 206 in Figure 4. Initially, however, the only operation which occurs after the system object is instantiated is the receipt of a command message to instantiate an object. Whenever a message is received as shown at 208 in Figure 4, the input message parser 26 checks the syntax of the message and determines at 210 whether the message is for the system object. Although not shown in Figure 4, if the syntax of the message is incorrect, the input message processor 26 will prepare an error message which is queued in the output registry 34. If it is determined at 210 that the incoming message is for the system object (i.e. a command to instantiate an object), the input parser passes the command to the system object which then checks for hardware resource availability at 212 and determines whether sufficient pins are available to instantiate the object called for in the command. More particularly, the system object interrogates the instantiation layer of the object to determine what resources are needed to instantiate the object and then determines whether sufficient resources (e.g. pins and memory) are available. If the system object determines at 212 that (because of other object instantiations which preceded this one) there are not enough pins to instantiate the object, it buffers an error message and sends a pointer to the output registry at 214 to return an error message to the host. Control is then returned to the timing kernel which scans the active task list at 206.
If it is determined at 212 that sufficient resources are available to instantiate the object, the system object calls at 216 the instantiation layer of the object in the object library and commands the object to instantiate itself in RAM 14. The instantiation layer of the object calls (at 218 in Figure 4) the memory manager 38 and requests an allocation (e.g. 14e) of RAM 14. The memory manager checks for the availability of RAM at 220 and if insufficient memory is available, sends an error message at 214. If enough RAM is available, the memory manager 38 returns a pointer at 222 (to a starting address in RAM) to the instantiation layer which receives the pointer and arranges its memory at 224. The memory manager also increments at 222 a heap pointer which is used by the memory manager to determine at 220 whether sufficient RAM is available for other instantiations. After the instantiation layer 46 successfully completes instantiation, it informs the system object 22 that instantiation was successful and sends the pointer to the system object at 226. When an object is instantiated, the instantiation layer arranges (at 224) its allocated RAM into organized parts. The first part is the output message header which includes a pointer to the output buffer of the object instantiation, the message length, the active task list index, the flow priority of the object instantiation, the message type, and the source ID. This first part is common to all objects, i.e. all instantiated objects arrange part of their allocated RAM in this manner. One or more other parts of RAM are arranged for private data used by the instantiated object in the performance of its functionality. It should be noted that the message flow priority is a variable which is selected by the developer and is assigned to the instantiation of the object during initialization of the array. The message flow priority is described in more detail below with respect to the "event-reaction" communications protocol. At 228, the system object 22 stores a pointer in the active task list table 14b which points to the portion of the ROM where the object resides. Each pointer in the active task list table is associated with an index number and the index number for the pointer is provided by the system object to the instantiation layer which stores the index number in the portion of the RAM it has configured for storage of static variables. It should be appreciated that the above described layers of the object are not copied into the RAM allocated to an instantiated object. The actual functionality of the object remains in ROM and is located by the pointer in the active task list table 14b. The RAM allocated to an instantiation of the object is used by the functionality of the object. It should also be appreciated that a particular object in the object library may be instantiated several times. In practice, each object has a functional name (which refers to the object in ROM) and an instantiated name (which refers to the instantiation of the object). The instantiated name is given as part of the high level command to the system object at the beginning of instantiation. The system object 22 also stores the instantiated name of the object in the active task list name table 14c which associates the name with the same index number as the pointer to ROM, and stores the pointer to the allocated block of RAM in the active task list data space 14d which is also associated with the same index number.
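The three tables and the registration step at 228 might be sketched as follows; the table sizes, the name length, and the function signature are assumptions.

```c
/* Sketch of the active task list table (14b), name table (14c), and data
 * space table (14d), all indexed by the same index number. Illustrative only. */
#include <string.h>

#define MAX_TASKS 32
#define NAME_LEN   8

typedef void (*object_entry_t)(void *instance_ram);   /* entry point in ROM */

static object_entry_t active_task_list[MAX_TASKS];            /* 14b: pointers into ROM */
static char           active_task_names[MAX_TASKS][NAME_LEN]; /* 14c: instance names    */
static void          *active_task_data[MAX_TASKS];            /* 14d: allocated RAM     */
static int            num_active_tasks;                       /* the global task count  */

/* Called after an object reports successful instantiation; returns the index
 * number shared by all three tables, or -1 if no slot is free. */
int register_instantiation(object_entry_t rom_entry, const char *name, void *ram) {
    if (num_active_tasks >= MAX_TASKS)
        return -1;
    int idx = num_active_tasks++;
    active_task_list[idx] = rom_entry;
    strncpy(active_task_names[idx], name, NAME_LEN - 1);
    active_task_names[idx][NAME_LEN - 1] = '\0';
    active_task_data[idx] = ram;
    return idx;
}
```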
The system object 22 then recomputes the scheduling of objects in the active task list table 14b. More particularly, each time an object instantiation is completed at 228 in Figure 4, all active tasks are stopped as shown in Figure 4 at 230 and all instantiated objects are called to their time layer. Each instantiated object returns a worst case time to the system object at 232 and the worst case time is used to calculate an offset for each active task (each instantiated object includes at least one active task). The system object 22 assigns the offset to each object instantiation which the time layer stores in private data at 234. The time between the worst case and the actual time is advantageously used by the system object for system (background) functions; i.e. system functions are not otherwise scheduled and therefore do not require overhead. After rescheduling in this manner is completed, the system object returns control to the timing kernel which resumes scanning the active task list at 206.
After instantiation, pins may be assigned to the instantiated object (if necessary) by sending command messages directly to the instantiated object. It will be appreciated that the functionality of a particular object may include performing certain input or output tasks which require a physical connection to an external device such as a keyboard or a display. However, some objects may have functionality which only requires communication with other objects and/or with a script server (as described below) in which case pins do not need to be assigned to the instantiated object. In order to assign pins to an instantiated object, messages to the instantiated object are addressed to the instantiated name of the object. For example, as shown in Figure 4, when an incoming message is detected at 208, the input message processor 26 checks the syntax of the message, buffers the message, and examines the message to determine at 210 if the message is for the system object. If the message is not for the system object, it will be addressed to a named instantiation of an object. The input message processor looks at 236 for the named instantiation in the active task list name table 14c to determine the index number of the instantiated object to which the command is directed. Although not shown in Figure 4, if the name cannot be found in the active task list name table, an error message will be prepared and queued with the output registry. The input message processor also scans at 236 the active task list table 14b using the index number to find the pointer to the portion of ROM which contains the layers of the object. The input message processor then forwards at 238 the message and the index number to the parser layer of the object. The parser layer of the object uses the index number to determine which instantiation of the object is being addressed and to find the pointer to the appropriate portion of RAM. The parser layer also interprets the message and determines at 240 whether the message is a configuration message, e.g., to assign pins or to set a flow priority. If it is determined at 240 that the message is a configuration message, the configuration data is stored at 242 in the appropriate portion of RAM.
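A minimal C sketch of this routing step follows, assuming a fixed-size active task list and hypothetical table and function names. The name table (14c) and the task list table (14b) are shown here as parallel arrays indexed by the same number purely for brevity.

    #include <string.h>

    #define MAX_TASKS 16

    /* Hypothetical mirrors of the active task list structures. */
    static const char *task_name[MAX_TASKS];   /* instantiated names (14c)       */
    static void (*task_parser[MAX_TASKS])(int index, const char *msg);
                                               /* parser layer entry in ROM (14b) */

    /* Route a non-system message: find the named instantiation, recover its
     * index, and hand the message to the object's parser layer. */
    int route_message(const char *inst_name, const char *msg)
    {
        for (int i = 0; i < MAX_TASKS; i++) {
            if (task_name[i] && strcmp(task_name[i], inst_name) == 0) {
                task_parser[i](i, msg);  /* parser layer uses the index to find its RAM */
                return 0;
            }
        }
        return -1;  /* name not found: an error message would be queued */
    }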
If the parser layer of the object determines at 240 that the message is not a configuration message, the message is processed at 244 by the functional layer of the object and control returns to the timing kernel to scan the active task list. The functional layer of instantiated objects also may generate messages which need to be sent to another object or script server outside the array 10. Messages from an instantiation of a cell are placed in the output message header part of its RAM and a request for registration is sent to the central output registry 34. The output registry 34 maintains a queue of pointers to messages in output message header parts. The queue is scanned by one or more output flow managers. As shown in Figure 4, when an outgoing message is determined at 246 to be in the queue, the output flow manager reads at 248 the highest priority pointer in the queue. The pointer points to the output message header part of RAM used by the instantiation of the object which prepared the message. The output flow manager uses the data there to prepare and send messages at 250 in Figure 4, or to send a "data event" as described in more detail below with reference to the "event-reaction" messaging protocol.
As mentioned above, objects may be instantiated several times so long as enough hardware resources (pins and RAM) are available to support another instantiation. The system object 22 keeps track of all object instantiations by placing entries in the active task list table 14b, the active task list name table 14c, and the active task list data space table 14d. In addition, the memory manager 38 maintains a pointer to the memory heap which is utilized and generates an error message if requested to assign more RAM than is available. After any necessary pins have been assigned to an instantiation of an object, a message flow priority can be assigned. Alternatively, flow priority may be assigned before assigning pins. According to a presently preferred embodiment, a flow priority of 0-7 may be assigned where 0 represents polled priority and 1-7 indicate increasing levels of importance. When the priority level is greater than zero, output messages from the instantiated object will be generated autonomously. The flow priority is stored by the instantiation of an object in its output message header part of RAM.
The System Object
As mentioned above, the system object instantiates itself automatically when power is applied to the object oriented processor array. The operations of the system object are shown in greater detail in Figure 4a. Referring now to Figure 4a, when power is applied to the object oriented processor array, the system object seizes a pre-assigned portion of RAM for its use, shown at 1200 in Figure 4a. After performing low level diagnostics of the object oriented processor array at 1202, the system object starts the timing kernel at 1204. At this point, the host or the boot ROM may send global configurations to the system object, shown at 1206, such as "enable exception reporting", etc. The system object then waits at 1208 for a command from the host to instantiate an object. Upon receiving a command to instantiate an object, the system object examines the hardware resources (e.g. memory available) in the object oriented processor array to determine at 1210 whether there are sufficient resources available to instantiate this particular object. It will be understood that the system object need not be provided with the knowledge of the hardware requirements of each of the objects in the object library. If insufficient resources are available, the system object sends an error message (if exceptions are enabled) at 1212 and returns at 1208 to await a command to instantiate an object. It will be understood that in a fully developed application, e.g. where commands to the system come from a programmed ROM, there will be no errors and that the reporting of errors shown in Figure 4a is used during the development of an application when the object oriented processor array is coupled to a host processor. If it is determined at 1210 that sufficient resources are available, the system object calls the instantiation layer of the specified object at 1214. The instantiation layer performs the tasks described above with reference to Figure 4 at reference numerals 218 through 226 and returns its memory pointer to the system object, which receives the pointer as shown in Figure 4a at 1216. The system object then writes to the active task lists at 1218 as described above with reference to Figure 4 at reference numeral 228. After writing to the active task lists, the system object takes control from the timing kernel at 1220 and calls the timing layers of all instantiated objects at 1222. As explained in further detail below with reference to Figure 4c, the timing layers report their worst case times at 1224 and the system object calculates an offset value for each instantiated object at 1226. These values are given to the objects as described above with reference to Figure 4 at reference numeral 234. The system object then turns control back over to the timing kernel at 1228 and returns to 1208 to await any further commands to instantiate objects.
The Memory Manager
As mentioned above, the memory manager keeps track of available RAM during the instantiation processes. More specifically, as shown in Figure 4b, after the system object is self-instantiated, the memory manager, at 2200, reads the total memory amount of the object oriented processor array and sets the heap pointer to the next available portion of RAM beyond the portion already occupied by the system object. The memory manager then waits at 2202 for a request to assign RAM. When the memory manager receives a request to assign RAM from the instantiation layer of an object, it examines the request at 2204 to determine the amount of RAM requested. The memory manager finds the amount of RAM currently available at 2206 by subtracting the heap pointer from the total memory amount. The memory manager decides at 2208 if the request for RAM can be fulfilled by comparing the amount requested to the amount available. If there is insufficient RAM, the memory manager sends an error message at 2210 to the instantiation layer of the object. It will be understood that the error reporting is only used when an application is being developed. If there is sufficient RAM, the memory manager assigns RAM to the requesting object at 2212 by giving it the current location of the heap pointer. The memory manager then adjusts the location of the heap pointer by adding the requested amount of RAM to the pointer at 2214, thereby moving the pointer to the start of the portion of RAM beyond that now occupied by the instantiated object. The memory manager then returns to 2202 to await another request for RAM.
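The memory manager thus behaves as a simple bump allocator over a single heap. A minimal C sketch under that assumption is shown below; the total RAM size, the initial heap offset, and the function name are illustrative only.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative bump-allocator sketch of the heap pointer behaviour
     * described above; sizes are assumptions. */
    static uint8_t ram[4096];         /* total RAM of the array                     */
    static size_t  heap_ptr = 256;    /* first free byte beyond the system object's
                                         block (assumed size)                       */

    void *mm_request(size_t n_bytes)
    {
        size_t available = sizeof(ram) - heap_ptr;  /* total minus heap pointer */
        if (n_bytes > available)
            return NULL;              /* would trigger an error message         */
        void *block = &ram[heap_ptr]; /* current heap pointer becomes the block  */
        heap_ptr += n_bytes;          /* pointer advances past the new block     */
        return block;
    }

Note that, consistent with the description above, the heap pointer only advances; the sketch therefore provides no way to release memory.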
Timing and Scheduling
According to one aspect of the invention, the system object is not allocated any specific operation time by the timing kernel, nor does the system object allocate any time for its own use when performing scheduling tasks as described above with reference to Figure 4 at 232 and 234. More particularly, as shown in Figure 4c the timing kernel initializes at 3200 (when started by the system object as described above with reference to Figure 4a at reference numeral 1204). If a new object has been instantiated at 3202, the system object takes control from the kernel at 3203 as described above with reference to Figure 4a at reference numeral 1220. The system object collects the worst case times from all instantiated objects at 3204 as described above with reference to Figure 4a at reference numerals 1222 and 1224. The system object then totals all of the worst case times and also adds a predefined time allocated for shell operations (i.e., communications and message processing) at 3206. The times are given in terms of the system clock which will depend on the clock frequency. The system object then pro-rates timer interrupts for each of the active objects and the shell at 3208. At 3210, the system object allocates the system clock time in the form of offsets which will be used by the timing kernel to allocate time to each active object and the shell. Each active object stores its assigned offset at 3212 in a portion of its allocated RAM. The system then returns control to the timing kernel at 3214. The timing kernel scans the active task list at 3216. For each object in the active task list, the timing kernel will allow the processor to devote a number of clock cycles to the task of that object at the appropriate time which is based on the offset for that object as shown at 3218. However, the object may not need all of the clock cycles assigned to it. For example, if the object is an input processor coupled to a keyboard or an encoder and no input activity is taking place, the object will be idle. If an object in the active task list does not need any time, the time is given to the system object at 3220. When the system object is given time at 3220, it looks at 3202 to see if there has been a new object instantiated. If no new object has been instantiated, the system object looks at 3222 to see if there is a command to instantiate a new object. If there has been a command to instantiate a new object, the system object calls the object's instantiation layer at 3224 as described above. If there is no command to instantiate at 3222, the timing kernel continues to scan the active task list at 3216.
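One way to realize the offset computation described above is sketched below in C. It assumes that each object's time slot simply begins where the previous slot's worst case ends and that the shell is given a fixed slot at the start of the frame; the specification does not mandate this particular arrangement, and the constants are illustrative.

    #include <stdint.h>

    #define MAX_TASKS  16
    #define SHELL_TIME 200   /* clock cycles reserved for shell operations (assumed) */

    /* Worst-case times reported by each instantiated object's timing layer. */
    static uint32_t worst_case[MAX_TASKS];
    static uint32_t offset[MAX_TASKS];   /* stored by each object in its RAM */
    static int      n_tasks;

    void reschedule(void)
    {
        uint32_t t = SHELL_TIME;         /* shell slot first (assumption)        */
        for (int i = 0; i < n_tasks; i++) {
            offset[i] = t;               /* object i's slot starts here          */
            t += worst_case[i];          /* next slot begins after the worst case */
        }
        /* t is now the frame length; time unused inside each slot falls back
         * to the system object for background functions. */
    }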
Alternate Embodiment Initialization
The object oriented processor array according to the invention may utilize memory and instantiate objects in a slightly different manner, according to an alternate embodiment. In particular, as shown in Figure 5, the memory 314 is arranged in a slightly different manner, i.e. there is a reserved area of memory 314e in which instantiated objects store pointers as described below. The provisioning of this reserved area of memory obviates the need for an active task list name table or an active task list data space table, and only the active task list 314b is needed. However, provisioning this reserved area can waste memory which is never used.
Referring now to Figures 5 and 6, when the object oriented processor array is powered on at 300 in Figure 6, the system object automatically instantiates itself at 302 in Figure 6 in a portion 314a of RAM as shown in Figure 5. During auto-instantiation, the system object also reserves a portion of RAM 314b for maintaining an active task list, a list of pointers to objects in the object library. The object oriented processor array is thus in a condition to receive high level commands from the host or boot ROM. An exemplary configuration command from the host to the object oriented processor array according to this alternate embodiment takes the form {zF(ENC4)}, where z is the address of the system object, F is the command to instantiate, and ENC4 is the name of an object in the object library, i.e. a 4-wide encoder. According to this alternate embodiment, the names (addresses) of the different objects are given functionally, e.g. LCDT (text LCD controller), ENC4 (4-wide encoder), KB44 (4x4 keypad controller), etc. In response to this command, given at 304 in Figure 6, the input message processor checks the syntax of the command at 306 and passes the command to the system object. The system object sends a call at 308 to the instantiation layer of the object "ENC4" and tells the object "ENC4" to instantiate itself. In response to the command from the system object, the instantiation layer of the object "ENC4" checks at 310 its predefined area of reserved memory 314e for prior instantiations of "ENC4" and determines at 312 whether there are sufficient hardware resources available for an(other) instantiation. According to this alternate embodiment, each object in the library is provided with a pre-coded address to a small block of RAM (a portion of 314e in Figure 5) which is thus reserved for its use in keeping track of instantiations. If it is determined at 312 that insufficient resources are available, an error message is sent which is received by the output registry at 313 and forwarded to the host which receives an error message at 316. If it is determined at 312 that sufficient resources exist for the instantiation, the instantiation layer of "ENC4" calls the memory manager and requests at 318 an allocation of RAM sufficient for its needs. The memory manager maintains a pointer to the next byte of available memory in the "heap" as well as the address of the end of the heap. In response to a request for "n-bytes", the memory manager subtracts "n-bytes" from the end of heap address and compares the result to the heap pointer to determine at 320 whether there is enough RAM available. If sufficient RAM is not available, an error message is sent at 322 to the output registry which passes the message to the host. If it is determined at 320 that RAM is available, the memory manager, at 324, assigns the pointer to the instantiation of "ENC4" and increments the heap pointer by n-bytes. The instantiation layer of "ENC4" receives the pointer at 326 and writes the pointer at 328 to its block of reserved memory in 314e. As illustrated in Figure 6, the pointer points to the start of memory block 314c. According to this alternate embodiment, the object "ENC4" allocates a portion of the RAM space assigned to it for output message headers and another portion of the RAM assigned to it for "private data". When the object "ENC4" has been instantiated, the task dispatcher in the system object stores a pointer at 330 to the object "ENC4" in the active task list.
According to this alternate embodiment, the position in the active task list is used as the instantiation name of the instantiation of the object. For example, if the active task list has six entries (a-f), the first instantiated object will have the instantiation name "a", the second "b", the third "c", etc. Further communications with an instantiation of an object will utilize this name. After an object has been instantiated as described above, pins can be assigned to it by the host using the command language according to the invention. For example, a command of the form {aP(B)} from the host to the object oriented processor array is directed to the instantiated object having the name "a" and utilizes the command P to assign pins where the parameter B is the location of the pins assigned to "a". As shown in Figure 6, such a command is issued at 332. Upon receipt of such a message from the host, the input message processor checks the message at 334 for correct syntax and will generate an error message to the host if the syntax is incorrect. Based upon the address "a", the input processor will look at 336 for "a" in the active task list and direct the message to object "ENC4". According to this alternate embodiment, "a" would be the first pointer in the active task list, "b" would be the second, etc. According to the example given herein, the pointer at "a" points to the object "ENC4" and the input processor will therefore forward the message at 338 to the object "ENC4". The object "ENC4" receives the addressed message and scans its reserved memory area at 340 in Figure 6 to find the pointer to the assigned workspace of the named instantiation of itself. The pin numbers are then stored at 342 by the object "ENC4" in the private data area of instantiation "a". Once the pins have been assigned to the instantiation "a" of the object "ENC4", the instantiation is operational and the pins are functioning with a default flow priority of zero.
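A trivial C sketch of this naming convention follows; it assumes zero-based positions in the active task list and is provided only to make the mapping explicit.

    /* Instantiation names in this alternate embodiment are simply the letter
     * corresponding to the object's position in the active task list:
     * 'a' for the first entry, 'b' for the second, and so on. */
    char instantiation_name(int task_list_position)   /* 0-based position */
    {
        return (char)('a' + task_list_position);
    }

    int task_list_position(char name)                 /* inverse lookup */
    {
        return name - 'a';
    }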
Each time a new object is instantiated as described above, the timing of all tasks is recalculated. Thus, as shown in Figure 6, the system object stops all tasks at 344 and calls the timing layer of all instantiations of objects. The instantiations respond at 346 with a worst case time to the system object and the worst case time is used to calculate the offset for the next active task. Each instantiation of an object stores at 348 its offset in the private data area of its assigned RAM. The time between the worst case and the actual time used by each object instantiation is used by the system object for background tasks. After the timing of all the tasks is recalculated, the system object returns at 350 to scanning the active task list which is described in more detail below with reference to Figure 7. As shown in Figure 6, timing may be recalculated in response to a host command at 352 to assign a particular priority level to a particular object instantiation. The scheduling of priorities may be performed by a scheduler which may be considered a part of the system object or a separate entity.
Turning now to Figure 7, the task dispatcher of the system object continually scans the active task list starting at 400 and periodically checks at 402 whether a new object instantiation has been added. In actual practice, the scheduler sets timers for tasks based upon their priority and background tasks are completed when extra time is available. Therefore, the order of operations shown in Figure 7 is not necessarily the order in which operations will be scheduled by the scheduler. If a new object instantiation has been added, the procedure described above (344-350 in Figure 6) is performed at 404 in Figure 7 and the system object then returns to scanning active tasks. This includes monitoring the output message processor and the input message processor to determine whether messages need to be delivered. For example, at 406 it is determined whether an incoming message is pending (in the buffer of the input processor). If so, the input processor examines the active task list at 408 to determine the object to which the message is addressed and passes the message to the object. The parser layer of the object examines, at 410, its preassigned reserved memory to determine which instantiation of itself should receive the message and passes the message to the appropriate layer (functional, timing, or instantiation) of the object for processing as an active task. The system object then checks at 412 whether an outgoing message is in the queue of the output registry. If there are messages in the queue, the output message processor reads the highest priority pointer in the output registry at 414. The pointer in the queue points to the output message header of the object generating the message. As mentioned above, the output message headers contain pointers to output buffers in RAM as well as an indication of the type of data to be sent, and a flag to indicate whether the output message header has been queued, etc. At 416, the output message former uses the output message header and the data to create a message for output onto the network, sends the message, and then drops the queue flag. An example of a message format according to this alternate embodiment is
{ToArraytoobjMethod(data)FromObjfromarray} where the fields are delimited by case, ToArray is the processor address of the recipient, toobj is the object address of the recipient, Method is a function, data is data, and the last from fields indicate the address of the sender. If the from addressing is blank, the host is the sender. The system object then resumes scanning the active task list at 400.
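A sketch of composing a message in this case-delimited format is given below in C; the helper name and parameter names are hypothetical, and passing empty strings for the two "from" fields corresponds to the host being the sender.

    #include <stdio.h>

    /* Illustrative composition of a {ToArraytoobjMethod(data)FromObjfromarray}
     * style message; field values are supplied by the caller. */
    int format_message(char *buf, size_t len,
                       const char *to_array,   /* processor address of recipient */
                       const char *to_obj,     /* object address of recipient    */
                       const char *method,     /* function to invoke             */
                       const char *data,
                       const char *from_obj,   /* sender object, may be ""       */
                       const char *from_array) /* sender processor, may be ""    */
    {
        return snprintf(buf, len, "{%s%s%s(%s)%s%s}",
                        to_array, to_obj, method, data, from_obj, from_array);
    }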
Central Script Server
Turning now to Figure 8, a presently preferred embodiment of the object oriented processor array 10 is contained on a single chip having a plurality of pins, e.g., p0 through p39. Three pins, e.g. p0, p1, and p2, are preferably reserved for a communications link with a host processor or script server 50 and additional object oriented processor arrays, e.g. 10a-10c, via a network link 25; and two pins, e.g. p38 and p39, are preferably reserved for connection to a DC power source 52. According to the presently preferred embodiment, the three pins are used to implement the communications network described in co-owned, co-pending application number 08/645,262, filed May 13, 1996, the complete disclosure of which is incorporated by reference herein. In some instances, however, it may be advantageous to provide discrete point-to-point links between the host 50 and each array 10, 10a-10c, e.g., when the arrays are physically removed from each other by a substantial distance, when the arrays are operating at substantially different speeds, or when only one array is used. In these situations, only two pins will be needed to support the link and one may use the point-to-point communication methods disclosed in co-owned, co-pending application number 08/545,881, filed October 20, 1995, the complete disclosure of which is incorporated by reference herein. It should also be noted that the script server may also be coupled to other conventional microprocessors and/or peripheral devices to create a system which combines distributed object oriented processing with traditional microprocessor systems.
According to the invention, the host processor 50 is used to configure the object oriented processor arrays 10, 10a-10c utilizing a high level command language and is preferably also used to communicate with the object oriented processor arrays during normal operations. According to one embodiment of the invention, the host processor acts as a central script server and all messages generated by an object oriented processor array are sent to the script server for processing. More particularly, when an instantiated object in one of the arrays 10, 10a-10c has data to send to another object, the data is first sent to the script server 50 and a program running on the script server determines the destination for the data. The program on the script server may also manipulate the data before sending it on to another object. According to the invention, communications between object oriented processor arrays 10, 10a-10c and the script server are managed according to an "event-reaction" model.
Event-Reaction Model
When several object oriented processor arrays are coupled to a single script server, the script server may be required to participate in many concurrent dialogs with the several object oriented processor arrays. Each concurrent dialog requires a separate thread in the script server. However, the event-reaction model relieves the developer from the task of writing complicated multithreaded script server code. In addition, objects in an object oriented processor array which are coupled to input devices can generate data events at a rate faster than the script server can (or desires to) process them. For example, the rotation of a rotary encoder may cause a data event every time the encoder is advanced one detent. This could result in a significant amount of redundant computation and communication. The event-reaction model solves this problem by allowing the script server to control when data is sent to it. According to the event-reaction model of the invention, when an object within an array 10 has a message to send, it generates a data event. The data event is registered with the script server 50 and with the output message processors of each array 10, 10a-10c which is coupled to the script server 50 via the bus 51. The script server 50 reacts to the data event by allowing a variable amount of I/O exchange between the object and the script server prior to sending an acknowledgement of the data event onto the bus 25. According to one mode of operation, until the data event is acknowledged, no other data event may be sent to the script server 50 from any object on any of the arrays 10, 10a-10c. According to another mode of operation, a fixed number of data events may be pending simultaneously. The number of data events which may be pending simultaneously is determined by the script server 50, and each of the arrays is configured when it is initialized by the script server so that its output message processor knows how many pending events are allowed at any one time. The output message processor in each array keeps a counter of the pending data events. When a data event is generated, the counter is incremented, and when a data event is acknowledged, the counter is decremented. Those skilled in the art will appreciate that the number of pending data events which are allowed will determine the number of threads which need to run on the script server.
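The gating logic kept by each output message processor can be sketched as a simple counter compared against the configured maximum, as in the following C fragment; the function names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Pending-data-event gate kept by an array's output message processor.
     * max_pending is set by the script server at initialization; the counter
     * tracks data events and acknowledgements observed on the bus. */
    static uint8_t max_pending;   /* configured by the script server  */
    static uint8_t pending;       /* data events not yet acknowledged */

    void configure_flow(uint8_t allowed)  { max_pending = allowed; }

    bool may_send_data_event(void)        { return pending < max_pending; }

    void on_data_event_registered(void)   { pending++; }  /* any array's event */

    void on_data_ack(void)                { if (pending) pending--; }

In this sketch, may_send_data_event() would be consulted before an array places a new data event on the bus.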
When multiple object oriented processor arrays are coupled to a central script server by a bus, each processor array is aware of network traffic, and data events related to a particular target (script server) receiver are registered with all arrays. The number of data events which may be simultaneously pending at any time is determined by the target and is known to each of the arrays. The target arbitrates the flow of messages based on the registration of data events. Arbitration may be on a FIFO basis or the targeted receiver may grant priority to later generated data events which have a higher flow priority (if the number of allowed pending data events is more than one). As mentioned above, the output message buffers of the instantiated objects provide for a message flow priority. The event-reaction model permits the script server code to be linear, with the number of threads in the script server code depending on the number of pending data events permitted.
When a number of object oriented processor arrays are coupled to the script server by individual links rather than a common bus, the script server communicates with each array according to the message flow priorities of the cells in the array. The number of threads in the code on the hub depends on the sum of the number of pending data events permitted in each object oriented processor array.
An example of the event-reaction model (where the maximum number of pending data events is one) is described as follows with reference to Table 1, below.
Table 1
As shown in Table 1, at t1 an object with data to send to the script server sends a [dataEvent] to the script server and all other objects capable of sending messages to the script server now must refrain from sending data events. When the script server is ready to receive the data from the object which sent the data event at t1, it issues a {cmdEvent at t2. The object addressed responds at t3 with a }cmdAck which also contains some of the data it has to send. The script server then requests at t4 that the object send the remainder of its data. The object sends the remainder of the data at t5 and the script server acknowledges that all data was received at t5+n by sending a [dataAck]. All objects are now free to send a data event which is shown in Table 1 at t5+m. The number of polled data packets between the data event at t1 and the data acknowledge at t5+n is variable depending on the timing of the system, the other tasks being handled by the script server, and the amount of data the object needs to send. The system can also include timers which cause a retransmittal of the original data event if the server does not respond within a certain time and/or which clear any pending events so that the event generator can be re-enabled.
Another example of the event-reaction model (where the maximum number of pending data events is two) is described as follows with reference to Table 2, below.
Table 2
As shown in Table 2, a first data generator sends a data event at t1 and a second data generator sends a data event at t2. Upon [dataEvent2, all data generators are prohibited from sending data events until a ]dataAck is generated. The server polls the first data generator for data at t3 and the first data generator acknowledges the server at t4, and sends the data. When all of the data is sent (which could require several {cmdEvents and }cmdAcks), complete receipt of the data is acknowledged at t5. All data generators are now free to send a data event. At t6 the server begins polling the second data generator. As seen in Table 2, before the second data generator responds, the first (or another) data generator sends another data event at t7. Again, with two pending events, all data generators are prohibited from sending additional data events. The second data generator sends its data at t8, the complete receipt of which is acknowledged at t9. With only a single data event pending in the script server, all data generators are again free to send a data event.
This example assumes that the first and second data generators had equal priority. However, if the second data generator had priority over the first, the server would have polled it first. Optionally, the server may implement a time division multiplexing scheme to allow both data generators to send data "simultaneously". It will be appreciated that data generators which are prohibited from sending data are still permitted to perform local operations and that they may be provided with buffers to store locally generated data until the server permits them to send their data.
High Level Language
A presently preferred embodiment of a messaging language which supports the event-reaction model has the following syntax:
<array address|message type><object name><method/data type>(<data length><data>...<data>). The <> indicate bytes, the () are literal characters and the ... shows that the number of data bytes may vary. The array address is five bits and for point to point communications, the array address is set to zero. The message type is three bits and presently six message types are defined. These include (1) exception, (2) data acknowledgement, (3) data event, (4) command acknowledgement, (5) polled response, and (6) command event. The object name is eight bits and is presently limited to ASCII characters @ through Z. The method/data type field is eight bits and is limited to ASCII characters a through z. This field is interpreted as a method when the message type is a command event or a command acknowledgement. There are presently five data types defined: (z) signed character, (y) unsigned character, (x) signed integer, (w) unsigned integer, and (v) floating point number. The data length is eight bits and does not include the left and right parenthesis or the data length itself. The acknowledgement of commands is mandatory and the acknowledgement must echo the address fields of the command.
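A sketch of packing a message according to this byte layout is shown below in C. The placement of the array address in the high five bits and the message type in the low three bits of the first byte is an assumption; the text above specifies only the field widths.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Pack one message into the byte layout described above.
     * The caller supplies an output buffer large enough for the result. */
    size_t pack_message(uint8_t *out, uint8_t array_addr, uint8_t msg_type,
                        char object_name, char method_or_type,
                        const uint8_t *data, uint8_t data_len)
    {
        size_t i = 0;
        out[i++] = (uint8_t)(((array_addr & 0x1F) << 3) | (msg_type & 0x07));
        out[i++] = (uint8_t)object_name;     /* ASCII '@' through 'Z'            */
        out[i++] = (uint8_t)method_or_type;  /* ASCII 'a' through 'z'            */
        out[i++] = '(';                      /* literal left parenthesis         */
        out[i++] = data_len;                 /* excludes parentheses and itself  */
        memcpy(&out[i], data, data_len);
        i += data_len;
        out[i++] = ')';                      /* literal right parenthesis        */
        return i;                            /* total bytes written              */
    }

Packing the five-bit array address and three-bit message type into a single byte matches the stated field widths, which together fill one byte.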
Smart Telephone Example
Turning now to Figure 9, an exemplary application of an object oriented processor array 10 is a "smart telephone accessory". In this application, the object oriented processor array 10 is provided with a speech recognition processor object 23a, a speech messaging processor object 23b, a caller ID processor object 23c, and a DTMF dialer 23d. Each of these processor objects is contained in the object library 20 and is self-instantiated in response to calls from the system object 22 which receives commands from a boot ROM 51 as described above. It will also be appreciated that the boot ROM also includes commands to assign pins to the objects 23a-23d. According to this application, two pins assigned to the speech recognition object 23a are coupled to an external microphone 53, and two pins assigned to the speech message object 23b are coupled to an external speaker 55. In addition, two pins assigned to the caller ID object 23c and the DTMF dialer object 23d are coupled to an RJ-11 plug or jack. The smart telephone accessory application is designed to respond to voice commands by dialing phone numbers associated with particular recognized names. In addition, this application is also designed to respond to incoming calls by announcing the name of the person calling. In order to accomplish these functions, the script server 51 is provided with scripts which are executed in response to a data event from either the speech recognition object 23a or the caller ID object 23c. An example of a script executed in response to a data event from the speech recognition object 23a is shown below in Table 3.
Table 3
As shown in Table 3, when the speech recognition object registers a data event, the script server looks up the recognized name in a database to find the telephone number associated with the name, in this case "HOME". The script server then sends a command to the DTMF dialer to dial the number associated with HOME in the database. In addition, the script server sends a command to the speech message object to play three messages, e.g. "I am", "calling", "home".
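Since the script language itself is not reproduced here, the following C sketch merely illustrates the reaction described above. The helpers lookup_number() and send_command(), the directory contents, and the object names are hypothetical stand-ins for the script server facilities and the instantiated DTMF dialer and speech message objects.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the script server's name/number database. */
    static const char *lookup_number(const char *name)
    {
        static const struct { const char *name, *number; } dir[] = {
            { "HOME", "5551234" },
        };
        for (size_t i = 0; i < sizeof dir / sizeof dir[0]; i++)
            if (strcmp(dir[i].name, name) == 0)
                return dir[i].number;
        return NULL;
    }

    /* Hypothetical stand-in: would format a command in the messaging
     * language and place it on the bus. */
    static void send_command(const char *object, const char *method, const char *arg)
    {
        printf("%s.%s(%s)\n", object, method, arg);
    }

    /* Reaction to a data event from the speech recognition object. */
    void on_speech_recognition_event(const char *recognized_name)
    {
        const char *number = lookup_number(recognized_name);
        if (number == NULL)
            return;
        send_command("dtmf",  "dial", number);           /* dial the stored number */
        send_command("voice", "play", "I am");           /* announce the action     */
        send_command("voice", "play", "calling");
        send_command("voice", "play", recognized_name);
    }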
Another example of a script for this application is shown below in Table 4.
Table 4
As shown in Table 4, when the caller ID object registers a data event, the script server looks up the phone number in a database to determine the caller's name. The script server constructs a message to play, e.g. "Jeff is calling", and sends the message to the speech message object.
Distributed (Local) Script Servers
According to another embodiment, seen in Figure 10, one or more object oriented processor arrays are provided with local, internal script servers and no central script server or host processor is needed. In this "distributed scripting" embodiment, communications are controlled according to the event-reaction protocol by providing the output message processor with additional functionality to register and queue data events and an event script look up table to determine which events relate to internal messages. The input message processor is provided with additional functionality to queue and buffer messages destined for objects in the array.
More particularly, an object oriented processor array 110 according to the invention has many similar components as the array 10 described above and the similar components are labelled with similar reference numerals incremented by 100. The array 110 includes an object library 120, a system object 122, and, after initialization, a number of active objects 123a, 123b, etc. The array also has a communications receiver 124a and a communications transmitter 124b, both of which are coupled to an external bus or link 125 for coupling the array 110 to other similar arrays. Messages bound for objects within the array are routed by an input router 126c which receives the messages from a buffer 126b. The buffer 126b buffers messages received from the global input parser 126a and the internal message interface queue 130. Messages which originate external of the object oriented processor array are received by the communications receiver 124a and passed to the global parser 126a. Messages which originate within the array are received by the internal message interface queue 130 which receives the messages from the data event processor 135. The data event processor 135 also provides input to the output queue 132 for transmission to objects external of the array by the transmitter 124b. The data event processor 135 receives input from the pending output event queue 133 and a programmable table of event scripts 137. As in the first embodiment, messages from objects in the array are registered in an output registry 134.
According to this embodiment, the source address of a data event is read by the data event processor 135 from the pending output event queue 133 and used to look up a script associated with that address in the script table 137. The script is executed by the data event processor 135 and typically results in the generation of a message. The message generated by the script may be destined to an object within the array or an object external of the array. In the former situation, the message is passed to the internal message interface queue. In the latter situation, the message is sent to the output queue 132. Thus, the data event processor and the script table function as a local script server within the array and there is no need for a central script server. It should be noted, however, that a central script server may be used with the array 110. Messages will be sent to the central script server when no script for the data event appears in the script table, or when the script associated with the data event causes a message to be sent to the central script server.
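The dispatch performed by the data event processor can be sketched as a table lookup keyed on the source address, as in the following C fragment; the table layout, sizes, and function names are assumptions.

    #include <stddef.h>

    #define MAX_SCRIPTS 16

    typedef void (*event_script_t)(const char *event_data);

    /* Programmable table of event scripts (137): each entry associates the
     * instantiation that raises an event with the script to run locally. */
    static struct {
        char           source_addr;   /* instantiation name that raised the event */
        event_script_t script;
    } script_table[MAX_SCRIPTS];

    /* Run the local script for a pending data event, or forward the event to
     * a central script server (when one is present) if no local script exists. */
    void dispatch_data_event(char source_addr, const char *event_data,
                             void (*forward_to_central)(char, const char *))
    {
        for (size_t i = 0; i < MAX_SCRIPTS; i++) {
            if (script_table[i].script && script_table[i].source_addr == source_addr) {
                script_table[i].script(event_data);   /* local reaction */
                return;
            }
        }
        if (forward_to_central)
            forward_to_central(source_addr, event_data);  /* no local script */
    }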
This embodiment of the object oriented processor array, like the first embodiment, may be embodied on a single chip using one or more microprocessors with associated RAM and ROM. When so embodied, the array 110 will also include a memory manager 138 and a timing kernel 136. It will be appreciated that the script table may be stored in either RAM or ROM and may be initialized (programmed) at startup by either a boot ROM or by a host processor.
Processor Objects Embodied as Separate Hardware Processors
As mentioned above, the object oriented processors in the object oriented processor array may be embodied as hardware, software, or a combination of hardware and software. Turning now to Figure 11, an object oriented processor array 510 is functionally similar to the object oriented processor array 10 shown in Figure 2. In this embodiment, separate microprocessors 521, 522, and 523a-c... are provided. The microprocessor 521 is used to handle all I/O functions with respect to the array 510. The microprocessor 521 therefore provides the functionality of the communications interface 524, the input message processor 526, the internal output flow manager 530, the external output flow manager 532, and the central output registry 534. The microprocessor 522 provides the functionality of the system object and the microprocessors 523a-c... provide separate processors for each instantiation of an object from the object library 520. According to one embodiment, the library contains software for each object and an object is instantiated by loading software into one of the processors 523a, etc.
Generalized Layout of Object Oriented Processor Arrays
While, to this point, the invention has been described primarily with reference to an embodiment utilizing a microprocessor, where the objects, including the system object, are implemented in software, as previously mentioned, the invention can be implemented in numerous ways, including, but not limited to, programmed microprocessors, field programmable gate arrays, custom LSIs, etc. The generic concept of the invention which allows for numerous implementations is seen schematically in Figure 12, where the object oriented processor array 10 has pins 1000a, a bit and format processing ring 1000b, an elastic memory ring 1000c, and a math and logic processing core 1000d. The pins 1000a are the actual physical link of the object oriented processor array 10 to the real world surrounding it. As will be explained in more detail below, the bit and format processing ring 1000b, the elastic memory 1000c, and the math and logic processing 1000d implement the objects, with most objects utilizing bit and format processing, memory, and math and logic processing, and with some objects utilizing only bit and format processing with memory, or math and logic processing with memory.
The bit and format processing ring 1000b performs the functions of reducing the highly disparate ensemble of real world interfaces to a regular structure, where the "regular structure" is determined by whatever best fits the needs of the math and logic processing core 1000d, and high speed, low level control of external devices. Typical functions performed in the bit and format processing ring 1000b include serial I/O, parallel I/O, specific low level control of analog to digital converters, and other processing of signals with high analog content. The implementation of the bit and format processing layer is highly variable. For example, standard microcontroller ports can be mapped into the function, although significant computing resources would be expended. Alternatively, field programmable gate array (FPGA) technology is well-suited to implementing these functions, although the use of FPGA technology is expensive in terms of circuit board area and cost. Many other implementations are possible. In particular, the bit processing portion of the bit and format processing ring can conceptually be viewed as a system which treats the input pins as a vector which is sampled at a high frequency. The act of sampling permits the bit processing to be thought of as a procedural algorithm which contains actions that enable changes of state to be detected very efficiently. Likewise, the format processing portion of the bit and format processing can be conceptually viewed as a procedural algorithm for reformatting of data for delivery to the relevant memory construct 1000c. The elastic memory 1000c can be implemented in a broad range of manners. The implementation can include one or more of buffers, FIFOs, LIFOs or stacks, circular buffers, mail boxes, RAM, ROM, etc. The elastic memory ring 1000c can include logic for the computation of pointer locations, etc. Alternatively, this logic can be incorporated into the functionality of the bit and format processing ring 1000b on one side, and the math and logic processing core 1000d on the other side. Where code is stored for instantiation of objects, the elastic memory 1000c includes the code.
The math and logical processor core 1000d can be one or more standard processor cores (e.g. Intel® x86, Motorola® 68xxx, Intel® 8051, PowerPC™, or Analog Devices 21xx DSP) which process and transform I/O data into whatever the external device or application requires. In other words, the math and logic processing portion includes the functional layers of the various processor objects. As with the elastic memory and bit and format processing rings, the design of the math and logical processor core 1000d is highly application dependent.
As suggested above, typically, instantiated objects of the object oriented processor array will utilize one or more pins 1000a, the bit and format processing ring 1000b, the elastic memory 1000c, and the math and logic processing core 1000d. For example, the speech recognition and speech message objects described above with reference to Figure 9 will typically utilize each of the rings and two pins each. However, certain objects which are internal objects (e.g. an internal database object) do not necessarily need to utilize the bit and format processing ring 1000b or the pins 1000a. Similarly, other objects, such as the encoder object which accepts input from a rotary encoder, buffers the input, and sends the input to the script server, do not require math and logic processing. These objects require only the pins 1000a, the bit and format processing ring 1000b, and the elastic memory 1000c.
There have been described and illustrated herein several embodiments of an object oriented processor array. While particular embodiments of the invention have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. Thus, while a particular microprocessor has been disclosed with reference to the presently preferred embodiment, it will be appreciated that other off the shelf or custom microprocessors could be utilized. Also, while specific software has been shown, it will be recognized that other software code could be used with similar results obtained.

Claims

1. An object oriented processor array configurable via a message based communications link, comprising: a) processor means for implementing a plurality of virtual processors; b) a writable memory coupled to said processor means; c) readable memory coupled to said processor means, said readable memory containing a system object and a library of functional processor objects, each functional processor object having a predefined functionality; d) a plurality of physical pins for coupling at least one of said functional processor objects to an external device; and e) communications interface means for coupling said object oriented processor array to the message based communications link, wherein said system object responds to a first configuration message sent to said object oriented processor array via the message based communications link by calling a first functional processor object in said library and commanding said first functional processor object to instantiate itself as a virtual processor in said writable memory and said first functional processor object is coupled to at least one of said plurality of physical pins.
2. An object oriented processor array according to claim 1 , wherein: said system object is automatically instantiated when power is applied to said object oriented processor array.
3. An object oriented processor array according to claim 1 , wherein: said first functional processor object has a predefined functionality which is configurable, and said first functional processor object responds to a second configuration message sent to said object oriented processor array via the message based communications link by configuring its predefined functionality.
4. An object oriented processor array according to claim 1, wherein: said system object responds to a second configuration message sent to said object oriented processor array via the message based communications link by calling a second functional processor object in said library and commanding said second functional object to instantiate itself as a virtual processor in said writable memory.
5. An object oriented processor array according to claim 4, wherein: said second functional processor object has a predefined functionality which is configurable, and said second functional processor object responds to a third configuration message sent to said object oriented processor array via the message based communications link by configuring its predefined functionality.
6. An object oriented processor array according to claim 1, further comprising: f) active task list means for listing instantiated virtual processors, wherein said system object responds to said first functional processor object instantiating itself by listing said first functional object as an instantiated virtual processor in said active task list means.
7. An object oriented processor array according to claim 6, further comprising: g) input message processing means coupled to said communications interface means, to said active task list means, to said system object, and to said instantiated first functional processor object, wherein said input message processing means directs messages received by said object oriented processor array via the message based communications link to one of said system object and said functional processor object.
8. An object oriented processor array according to claim 1, further comprising: f) a memory manager coupled to said writable memory and responsive to calls from each of said functional objects, wherein when said system object calls one of said functional processor objects to instantiate itself in writable memory, said functional processor object called by said system object calls said memory manager, said memory manager finds a pointer to a starting address in said writable memory and gives said pointer to said functional object called by said system object, and said functional object called by said system object stores said pointer in a portion of writable memory pre-assigned to it.
9. An object oriented processor array according to claim 1, further comprising: f) a memory manager coupled to said writable memory and responsive to calls from each of said functional processor objects, wherein said system object calls a first one of said functional processor objects to instantiate itself in said writable memory with a first instantiation name, said first one of said functional processor objects calls said memory manager, said memory manager finds a first pointer to a portion of writable memory and gives said first pointer to said first one of said functional processor objects, said first one of said functional processor objects gives said first pointer to said system object, said system object stores said first pointer with a first index in a memory table, stores said first index with said instantiation name in a name table, and stores said first index with said functional name of said first one of said functional objects in a task list table.
10. An object oriented processor array according to claim 1, wherein: said system object responds to additional configuration messages sent to said object oriented processor array via the message based communications link by calling additional functional objects in said library and commanding said additional functional objects to instantiate themselves as virtual processors in said writable memory, each of said first functional object and said additional functional objects includes means for calculating a worst case time needed to perform its function, upon instantiation of each functional object, said system object collects the worst case time for each instantiated functional object and schedules processor time for each instantiated functional object.
11. An object oriented processor array according to claim 10, wherein: said system object utilizes processor time not used by instantiated functional objects which complete functions in less than worst case time.
12. An object oriented processor array according to claim 10, wherein: said system object schedules processor time for each instantiated functional object by assigning an offset to each instantiated functional object.
13. An object oriented processor array configurable via a message based communications link, comprising: a) a plurality of processor means, each of said plurality of processor means for implementing a functional processor; b) a writable memory coupled to each of said plurality of processor means; c) readable memory coupled to each of said plurality of processor means, said readable memory containing a system object and a library of functional processor objects, each functional processor object having a predefined functionality; and d) communications interface means for coupling said object oriented processor array to the message based communications link, wherein said system object responds to a first configuration message sent to said object oriented processor array via the message based communications link by calling a first functional processor object in said library and commanding said first functional object to instantiate itself with one of said plurality of processor means in said writable memory.
14. An object oriented processor array according to claim 13, wherein: said writable memory includes a separate writable memory coupled to each of said plurality of processor means.
15. An object oriented processor array according to claim 13, wherein: said writable memory includes a shared writable memory coupled to each of said plurality of processor means.
16. An object oriented processor array configurable via a message based communications link and for use with at least one external device, comprising: a) communications interface means for coupling said object oriented processor array to the message based communications link; b) a plurality of functional objects, each object having a predefined functionality; c) a plurality of physical pins for coupling at least one of said functional objects to the external device, wherein said pins are selectively coupled to said at least one of said functional objects in response to a message sent to said object oriented processor array via the message based communications link.
17. An object oriented processor array configurable via a message based communications link, comprising: a) a processor; b) a writable memory coupled to said processor; c) readable memory coupled to said processor, said readable memory containing a system object and a library of functional processor objects, each functional processor object having a predefined functionality; d) a plurality of physical pins for coupling at least one of said functional processor objects to the external device; and e) a communications interface for coupling said object oriented processor array to the message based communications link, wherein said system object responds to a first configuration message sent to said object oriented processor array via the message based communications link by calling a first functional processor object in said library and commanding said first functional processor object to instantiate itself as a virtual processor in said writable memory and said first functional processor object is coupled to at least one of said plurality of physical pins.
18. An object oriented processor array according to claim 17, further comprising: f) an active task list for listing instantiated virtual processors, wherein said system object responds to said first functional processor object instantiating itself by listing said first functional object as an instantiated virtual processor in said active task list.
19. An object oriented processor array according to claim 18, further comprising: g) an input message processor coupled to said communications interface, to said active task list, to said system object, and to said instantiated first functional processor object, wherein said input message processor directs messages received by said object oriented processor array via the message based communications link to one of said system object and said functional processor object.
20. An object oriented processor array according to claim 17, wherein: said system object responds to additional configuration messages sent to said object oriented processor array via the message based communications link by calling additional functional objects in said library and commanding said additional functional objects to instantiate themselves as virtual processors in said writable memory, each of said first functional object and said additional functional objects includes a worst case time calculator which calculates the worst case time needed to perform its function, upon instantiation of each functional object, said system object collects the worst case time for each instantiated functional object and schedules processor time for each instantiated functional object.
21. A method of arbitrating communications between a plurality of message sources and a message target, said method comprising: a) establishing a maximum number of allowed pending messages; b) informing each of the sources what said maximum number is; c) each source monitoring communications with the target to determine a present number of pending messages; d) no source sending a message to the target unless the present number is less than the maximum number; e) each source starting a message to the target by sending a request to the target; f) the present number being incremented each time a request is sent to the target; g) the target responding to a request by a requesting source by commanding the requesting source to send a message; h) the target confirming receipt of each message sent to it; and i) the present number being decremented each time the target confirms receipt of a message.
22. A method according to claim 21, further comprising: j) assigning priorities to at least some of the message sources; k) informing the target of the assigned priorities; and
l) the target responding to multiple pending requests by sources with different priorities by commanding the source with the higher priority to send a message.
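
The arbitration method of claims 21 and 22 can be pictured, purely as a sketch under assumed names, as each source refusing to raise a new request once its count of pending messages reaches the agreed maximum, and only transmitting a message once the target commands it to. The classes Source and Target, the constant MAX_PENDING, and every method name below are illustrative assumptions; for brevity each source counts only its own requests, whereas the claims have each source determine the present number by monitoring all communications with the target.

MAX_PENDING = 4  # step a): assumed maximum number of allowed pending messages


class Source:
    """A message source that tracks the present number of pending messages."""

    def __init__(self, name, priority=0):
        self.name = name
        self.priority = priority      # step j): optional assigned priority
        self.present_number = 0       # step c): determined by monitoring the link
        self.outbox = []

    def try_request(self, target, message):
        # Step d): never send a request while the present number is at the maximum.
        if self.present_number >= MAX_PENDING:
            return False
        self.outbox.append(message)
        self.present_number += 1      # step f): incremented for each request sent
        target.receive_request(self)  # step e): start a message by sending a request
        return True

    def on_command(self, target):
        # Step g): the target has commanded this source to send its queued message.
        target.receive_message(self, self.outbox.pop(0))

    def on_confirm(self):
        # Step i): decremented each time the target confirms receipt.
        self.present_number -= 1


class Target:
    """Minimal target: it commands the requester to send and then confirms receipt."""

    def receive_request(self, source):
        source.on_command(self)       # step g)

    def receive_message(self, source, message):
        print("target received", message)
        source.on_confirm()           # steps h) and i): confirmation of receipt


if __name__ == "__main__":
    target = Target()
    keypad = Source("keypad")
    keypad.try_request(target, "key=7")   # allowed: present number is below the maximum
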
23. An object oriented processor system, comprising: a) a host processor; and b) a plurality of object oriented processors coupled to said host processor, at least some of said object oriented processors generating data in response to events, each of said data generating object oriented processors includes request means for sending a request to said host processor, memory means for storing a maximum permissible number of simultaneous pending requests, and monitoring means for monitoring communications with said host processor to determine a present number of pending requests, wherein i) each of said data generating object oriented processors sends a request to said host processor before data can be sent to said host processor; ii) each of said data generating object oriented processors is aware of a maximum permissible number of simultaneous pending requests; iii) each of said data generating object oriented processors monitors communications with said host processor to determine a present number of pending requests; iv) none of said data generating object oriented processors sends a request to said host processor unless the present number is less than the maximum number.
24. An object oriented processor system according to claim 23, wherein: each of said data generating object oriented processors includes means for incrementing the present number, and said host processor includes command means for commanding an object oriented processor to send a message, wherein v) the present number is incremented each time a request is sent to said host processor; and vi) said host processor responds to a request by commanding the requesting object oriented processor to send a message.
25. An object oriented processor system according to claim 24, wherein: said host processor includes confirmation means for confirming receipt of messages sent to said host processor, and each of said data generating object oriented processors includes means for decrementing the present number, wherein vii) said host processor confirms receipt of each message sent to it; and viii) the present number is decremented each time said host processor confirms receipt of a message.
26. An object oriented processor system according to claim 25, wherein: said host processor has priority memory means for associating at least some of said data generating object oriented processors with different priorities, wherein ix) at least some of said data generating object oriented processors have assigned priorities; x) said host processor is aware of the assigned priorities; and xi) said host processor responds to multiple pending requests by data generating object oriented processors with different priorities by commanding the data generating object oriented processor with the higher priority to send a message.
27. An object oriented processor system, comprising: a) a host processor; and b) a plurality of object oriented processors coupled to said host processor, at least some of said object oriented processors generating data in response to events, each of said data generating object oriented processors includes a request generator, memory, and a communications monitor, wherein i) each of said data generating object oriented processors sends a request via said request generator to said host processor before data can be sent to said host processor; ii) said memory of each of said data generating object oriented processors contains an indication of a maximum permissible number of simultaneous pending requests; iii) each of said data generating object oriented processors monitors communications with said host processor via said communications monitor to determine a present number of pending requests; iv) none of said data generating object oriented processors sends a request to said host processor unless the present number is less than the maximum number.
28. An object oriented processor system according to claim 27, wherein: each of said data generating object oriented processors includes a present number incrementer coupled to said memory and said communications monitor, and said host processor includes a command message generator, wherein v) each of said present number incrementers increments the present number each time said communications monitor indicates that a request is sent to said host processor; and vi) said host processor responds to a request by sending a command message via said command message generator to the requesting object oriented processor instructing the requesting object oriented processor to send a message.
29. An object oriented processor system according to claim 28, wherein: said host processor includes a confirmation message generator, and each of said data generating object oriented processors includes a present number decrementer coupled to said memory and said communications monitor, wherein vii) said host processor confirms receipt of each message sent to it by sending a confirmation message via said confirmation message generator; and viii) each of said present number decrementers decrements the present number each time said communications monitor indicates that said host processor confirms receipt of a message.
30. An object oriented processor system according to claim 29, wherein: said host processor has a priority memory, wherein ix) at least some of said data generating object oriented processors have assigned priorities stored in said priority memory; x) said host processor responds to multiple pending requests by data generating object oriented processors with different priorities by commanding the data generating object oriented processor with the higher priority to send a message.
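
Claims 23 through 30 recite the host side of the same scheme. The sketch below, again under assumed names (HostProcessor, priority_memory, command_next, on_message), shows a host that records incoming requests, consults a priority memory to command the highest-priority requester to send, and returns a confirmation for every message it receives, which is what allows the senders to decrement their pending counts.

class HostProcessor:
    """Illustrative host with a priority memory, a command message generator,
    and a confirmation message generator."""

    def __init__(self):
        self.priority_memory = {}    # processor id -> assigned priority (claims 26 and 30)
        self.pending_requests = []   # ids of processors with outstanding requests
        self.received = []

    def assign_priority(self, processor_id, priority):
        self.priority_memory[processor_id] = priority

    def on_request(self, processor_id):
        self.pending_requests.append(processor_id)

    def command_next(self):
        # Command message generator: pick the highest-priority pending requester.
        if not self.pending_requests:
            return None
        chosen = max(self.pending_requests,
                     key=lambda p: self.priority_memory.get(p, 0))
        self.pending_requests.remove(chosen)
        return ("COMMAND_SEND", chosen)

    def on_message(self, processor_id, payload):
        # Confirmation message generator: every received message is acknowledged.
        self.received.append((processor_id, payload))
        return ("CONFIRM", processor_id)


host = HostProcessor()
host.assign_priority("temperature", 1)
host.assign_priority("smoke_alarm", 9)
host.on_request("temperature")
host.on_request("smoke_alarm")
print(host.command_next())   # ('COMMAND_SEND', 'smoke_alarm') - the higher priority wins
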
31. A distributed processing system, comprising: a) a plurality of object oriented processors, each having a predefined functionality, and each capable of generating at least one data event; b) a script server coupled to each of said object oriented processors, said script server containing a plurality of executable scripts, each being linked to an identifying data event, wherein upon the generation of a data event by one of said object oriented processors, said script server executes the script associated with the data event.
32. A distributed processing system according to claim 31, wherein: said script server is a separate processor coupled to said plurality of object oriented processors by a message based communications link.
33. A distributed processing system according to claim 31, wherein: at least two of said plurality of object oriented processors are implemented as virtual processors in a single hardware microprocessor.
34. A distributed processing system according to claim 32, wherein: at least two of said plurality of object oriented processors are implemented as virtual processors in a single hardware microprocessor.
35. A distributed processing system according to claim 33, wherein: said script server is implemented as a virtual processor in said single hardware microprocessor.
36. A distributed processing system according to claim 31, wherein: said plurality of object oriented processors are arranged in discrete processor arrays.
37. A distributed processing system according to claim 36, wherein: said script server comprises a plurality of script servers, each being associated with one of said arrays.
38. A distributed processing system according to claim 37, wherein: at least one of said arrays is a single hardware microprocessor.
39. A distributed processing system, comprising: a) a plurality of object oriented processors, each having a predefined functionality, each capable of generating at least one data event, and each having an address; b) a script server coupled to each of said object oriented processors, said script server containing a plurality of executable scripts, each being linked to one of said addresses, wherein upon the generation of a data event by one of said object oriented processors, said script server executes the script associated with the address of the object oriented processor which generated the data event.
40. A distributed processing system according to claim 39, wherein: at least one of said scripts, when executed, results in said script server sending a command to one of said object oriented processors.
41. A distributed processing system according to claim 39, wherein: at least one of said scripts, when executed, results in said script server sending data to one of said object oriented processors.
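
Claims 31 through 41 describe a script server that reacts to data events. In the sketch below, whose names (ScriptServer, register_script, on_data_event) are assumptions rather than terms from the patent, the server keys its executable scripts either to an event identifier (claims 31 to 38) or to the address of the generating processor (claims 39 to 41) and executes the matching script whenever a data event arrives; a script may in turn send a command or data back to one of the processors.

class ScriptServer:
    """Illustrative script server: executable scripts keyed by event id or processor address."""

    def __init__(self):
        self.scripts = {}   # key (event id or processor address) -> callable script

    def register_script(self, key, script):
        self.scripts[key] = script

    def on_data_event(self, key, data):
        # Run the script linked to this event, or to the address of its source.
        script = self.scripts.get(key)
        return script(data) if script else None


# Example: a script linked to the address of a hypothetical keypad processor that
# answers with a command for a hypothetical display processor (claims 40 and 41).
def keypad_script(data):
    return ("SEND_COMMAND", "display_processor", "show " + str(data))


server = ScriptServer()
server.register_script("keypad_processor", keypad_script)
print(server.on_data_event("keypad_processor", 7))
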
42. A method of data processing in a distributed processing system having a plurality of object oriented processors, at least one of which is a data generating processor, said method comprising: a) coupling a script server to said plurality of object oriented processors; b) providing the script server with at least one script containing instructions for processing data generated by the at least one data generating processor; c) providing the data generating processor with instructions to send data to the script server for processing in accord with the at least one script.
43. A method according to claim 42, wherein: the at least one data generating processor is a first plurality of data generating processors, and said step of providing the script server with at least one script containing instructions includes providing a first plurality of scripts corresponding in number with the first plurality of data generating processors, each script containing instructions for processing data from a respective one of the data generating processors.
44. A method according to claim 43, wherein: at least one of the first plurality of scripts includes instructions for forwarding data to one of the object oriented processors.
45. A method according to claim 43, further comprising: d) coupling another processing system to the script server, wherein at least one of the first plurality of scripts includes instructions for forwarding data to the other processing system.
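
The method of claims 42 through 45 can be sketched, under assumed names only, as attaching one script per data generating processor to a script registry, with at least one script forwarding its data to another processing system coupled to the server. The ScriptRegistry class, the helper functions, and the stand-in other_system callable below are all hypothetical.

class ScriptRegistry:
    """One executable script per data generating processor (claim 43)."""

    def __init__(self):
        self.scripts_by_processor = {}

    def attach(self, processor_id, script):
        self.scripts_by_processor[processor_id] = script

    def process(self, processor_id, data):
        # The data generating processor sends its data here for processing (claim 42 c).
        return self.scripts_by_processor[processor_id](data)


def make_logger(label):
    def script(data):
        print(label + ":", data)
    return script


def make_forwarder(other_system):
    def script(data):
        return other_system(data)   # claim 45: forward data to the other processing system
    return script


other_system = lambda data: print("other processing system received", data)

registry = ScriptRegistry()
registry.attach("adc", make_logger("adc sample"))
registry.attach("uart", make_forwarder(other_system))
registry.process("uart", "hello")   # forwarded to the coupled processing system
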
PCT/US1999/000307 1998-01-07 1999-01-07 Object oriented processor arrays WO1999035548A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2000527869A JP2002542524A (en) 1998-01-07 1999-01-07 Object-oriented processor array
AU24522/99A AU2452299A (en) 1998-01-07 1999-01-07 Object oriented processor arrays
CA002317772A CA2317772A1 (en) 1998-01-07 1999-01-07 Object oriented processor arrays
EP99904036A EP1121628A2 (en) 1998-01-07 1999-01-07 Object oriented processor arrays

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US09/003,684 1998-01-07
US09/003,993 1998-01-07
US09/004,174 US6052729A (en) 1997-01-29 1998-01-07 Event-reaction communication protocol in an object oriented processor array
US09/003,684 US6567837B1 (en) 1997-01-29 1998-01-07 Object oriented processor arrays
US09/003,993 US6615279B1 (en) 1997-01-29 1998-01-07 Central and distributed script servers in an object oriented processor array
US09/004,174 1998-01-07

Publications (2)

Publication Number Publication Date
WO1999035548A2 true WO1999035548A2 (en) 1999-07-15
WO1999035548A3 WO1999035548A3 (en) 2000-10-26

Family

ID=27357464

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/000307 WO1999035548A2 (en) 1998-01-07 1999-01-07 Object oriented processor arrays

Country Status (5)

Country Link
EP (1) EP1121628A2 (en)
JP (1) JP2002542524A (en)
AU (1) AU2452299A (en)
CA (1) CA2317772A1 (en)
WO (1) WO1999035548A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146479B2 (en) 2001-07-18 2006-12-05 City U Research Limited Method and apparatus of storage allocation/de-allocation in object-oriented programming environment
US7487507B1 (en) 2001-07-18 2009-02-03 City U Research Limited Secure control transfer in information system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5165018A (en) * 1987-01-05 1992-11-17 Motorola, Inc. Self-configuration of nodes in a distributed message-based operating system
US5307495A (en) * 1987-10-23 1994-04-26 Hitachi, Ltd. Multiprocessor system statically dividing processors into groups allowing processor of selected group to send task requests only to processors of selected group
US5634070A (en) * 1995-09-08 1997-05-27 Iq Systems Distributed processing systems having a host processor and at least two object oriented processors which communicate directly with each other
US5692193A (en) * 1994-03-31 1997-11-25 Nec Research Institute, Inc. Software architecture for control of highly parallel computer systems

Also Published As

Publication number Publication date
WO1999035548A3 (en) 2000-10-26
JP2002542524A (en) 2002-12-10
CA2317772A1 (en) 1999-07-15
EP1121628A2 (en) 2001-08-08
AU2452299A (en) 1999-07-26

Similar Documents

Publication Publication Date Title
US6567837B1 (en) Object oriented processor arrays
CN1050916C (en) System for implementation-independent interface specification
US6311238B1 (en) Telecommunication switch with layer-specific processor capable of attaching atomic function message buffer to internal representation of ppl event indication message upon occurrence of predetermined event
US5428781A (en) Distributed mechanism for the fast scheduling of shared objects and apparatus
EP1514191B1 (en) A network device driver architecture
US4562535A (en) Self-configuring digital processor system with global system
AU649642B2 (en) Communications interface adapter
EP0362107B1 (en) Method to manage concurrent execution of a distributed application program by a host computer and a large plurality of intelligent work stations on an SNA network
US5764915A (en) Object-oriented communication interface for network protocol access using the selected newly created protocol interface object and newly created protocol layer objects in the protocol stack
EP0312739B1 (en) Apparatus and method for interconnecting an application of a transparent services access facility to a remote source
CA2245963C (en) Distributed kernel operating system
JPH06202883A (en) Equipment for communication between processes and method therefor
US6052729A (en) Event-reaction communication protocol in an object oriented processor array
WO2002031672A2 (en) Method and apparatus for interprocessor communication and peripheral sharing
JP2008306714A (en) Communicating method and apparatus in network application, and program for them
US6118862A (en) Computer telephony system and method
US6615279B1 (en) Central and distributed script servers in an object oriented processor array
EP1121628A2 (en) Object oriented processor arrays
Hitz et al. Using Unix as one component of a lightweight distributed kernel for multiprocessor file servers
JPH0256699B2 (en)
US6272524B1 (en) Distributed processing systems incorporating a plurality of cells which process information in response to single events
US20030145129A1 (en) Protocol driver application programming interface for operating systems
WO1988008162A1 (en) Data transfer system for a multiprocessor computing system
Maginnis Design considerations for the transformation of MINIX into a distributed operating system
KR19980086588A (en) System Resource Reduction Tool Using TCP / IP Socket Application

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

ENP Entry into the national phase in:

Ref country code: CA

Ref document number: 2317772

Kind code of ref document: A

Format of ref document f/p: F

Ref document number: 2317772

Country of ref document: CA

ENP Entry into the national phase in:

Ref country code: JP

Ref document number: 2000 527869

Kind code of ref document: A

Format of ref document f/p: F

NENP Non-entry into the national phase in:

Ref country code: KR

WWE Wipo information: entry into national phase

Ref document number: 1999904036

Country of ref document: EP

AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

122 Ep: pct application non-entry in european phase
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWP Wipo information: published in national office

Ref document number: 1999904036

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1999904036

Country of ref document: EP