METHODS AND APPARATUS FOR A GENERAL PURPOSE REASONING PLATFORM
Inventors:
Anthony J. Richter, Michael Lass, Johan Anwar, Russell H. Dewey,
Malcolm Beale, Mahaveer Pareek
BACKGROUND OF THE INVENTION
Neural networks have been the mainstay of development efforts trying to duplicate human brain functioning with artificial means. The success of neural nets can be attributed to their connectionist modeling which imitates the human brain's physical structures. In a neural network, neurodes imitate the brain's neuron cell bodies which perform the qualitative evaluation functions. Connections between cells imitate other brain structures which carry the output of one neuron to the inputs of others with varying degrees of signal strength.
Conventional neural networks typically are designed with several common elements to their architecture. For example, modern designs of neural networks usually arrange identical neurodes in a series of parallel layers. All of the cells in one layer are connected to all of the cells in the subsequent layer. Undesired connections are assigned a weight of zero, effectively nullifying their operation. There are no connections between neurodes of non-successive layers, including no intra-layer connections.
This basic model has changed little over the years and the majority of research and development efforts focus on finding better ways to determine the proper weights for the connections between the neurodes, and on finding strengths and weaknesses in the various
evaluation functions performed by the neurodes in different neural network implementations.
The weights for the connections in a neural network are determined by a process called "training." Large volumes of a priori correctly matching input and output vectors from statistical data are fed to the neural network during its training mode and the training algorithm resolves a best set of connection weights to maximize the number of correct outputs produced.
This method of training or programming a neural network performs well where a large and accurate statistical database is available. A large problem domain exists, however, which could benefit from aspects of the neural network processing paradigm but for which the necessary statistical data sets do not exist, often because they are not readily collectible. Ironically, such data sets are often not readily collectible because they are contained in the structure the neural network itself is trying to model: a human brain.
An example of such a case is where a seasoned expert operates a chemical coating process. The expert has learned to achieve consistent results in the face of varying temperatures, material qualities, and chemical purities. The influencing factors and their relative importance are accurately mapped into the neurons of the expert's brain and the expert has a sense of them. But the expert is unlikely to be able to fabricate a statistical data set capable of programming his or her experience into a neural network device. Hence there is a need for a new paradigm which lends itself to programming from information that can be obtained from an expert's own sense of his experiential knowledge.
In part because of the shortcomings of neural network technology, a separate branch of computing and cognitive science attempts to address the creation of expert systems. The development of expert systems has centered around the creation of text-based languages. These languages differ from standard procedural computing languages in that they emphasize the capture and representation of expert knowledge, which representation lends itself to processing by an underlying, generalized knowledge processing program. In distinction, standard procedural languages emphasize the representation of a process. While expert systems provide well-defined structures for codifying expert knowledge, in practice, they can be difficult to develop for problems of any complexity. The textual languages employed do not directly reveal the underlying processing. Programming changes can be difficult because, as with any text-based language, related program
segments may not be visually or proximally connected. And while in some cases, an expert system can be asked as to how it reached a certain output or conclusion, its overall internal functioning is hidden during normal operation keeping other information, perhaps as important as its conclusion, from the user; e.g., why a particular result did not occur. Hence there is a need in the art for a paradigm that easily conveys its knowledge relationships and processing to the user, preferably both during development and execution.
Another branch of computing and cognitive science, called visual programming, deals with the non-textual specification of a computing process. Generally, visual programming involves representing sub-processes iconically on a visual display and representing the flow of data or control between those sub-processes with connecting lines. Sub-processes sharing a common aspect, such as a category of operation, may share a common visual representation, such as a rectangular shape.
Visual programming language development tools are not used to visualize the programmed process during its execution. There are two primary reasons for this. First, the development process is operationally isolated from the execution process. Once a program is created using the visual language development tool, the resultant program specification is usually converted into a text-based programming language, then compiled, then executed. These steps may not necessarily occur at the same time, on the same machine, or using programs capable of communicating bi-directionally with one another. This isolates the visualizing component from the executing component.
Secondly, the information communicated to the user by dynamically changing the appearance of the program's visual design image may not necessarily be meaningful. This is because the visual objects manipulated to develop the program specification may not be related with sufficient specificity to the "objects" employed to finally execute the program. For example, while all rectangular boxes may each represent an operation involving the manipulation of data elements within the computer, there are countless possible manipulations which may be performed on countless possible types, items, organizations, and collections of data. So, making a rectangular box appear to blink on a user display during program execution will not likely be able to communicate more to the user than the fact that some manipulation is being performed on some data. This information is probably not meaningful and goes more to visualizing the process flow within the
computer rather than the real data in which the user is interested. Hence there is a need for visual programming where a more object-specific relationship exists between what the user sees on the screen during development and the underlying mechanisms employed to execute the program.
The present invention departs from the conventional architectural features of today's neural networks and provides a paradigm for the object-specific representation and execution of processing-related knowledge in brain-like, cell-based form, easily recognized, understood and manipulated by the user.
SUMMARY OF THE INVENTION
The present invention provides methods and apparatus for a general purpose reasoning platform. The general purpose reasoning platform may be constructed using a modern personal computer with a graphical operating system, widely known in the art, as a foundation. Additional processor hardware may be added to enhance the execution speed of the reasoning platform. Additional interface hardware may be added to enhance the problem domain to which the general purpose reasoning platform may be applied.
The general purpose reasoning platform provides the functionality to execute reasoning "programs" represented in the form of cell maps. Cell maps are created by establishing connections between functional nodes, displayed in an orderly fashion. Functional nodes come in various types, and perform relatively well-defined and specific operations. A functional node's base operation may be tailored via parameters specified during cell map development. The links, too, may be of various types, and tailored via the specification of operating parameters during cell map development. The types of functional node base operations may imitate the operations of various physical structures in the human brain.
Embodiments of the general purpose reasoning platform may provide the ability both to develop and to execute cell maps. During development, the cell map is visualized on a display device. A pointing device is chiefly used to direct the building up, tearing down, or modification of relevant portions of the cell map, and such changes are reflected in the visual display of the map. During execution, the cell map operation may also be visualized on a display device.
Such visualization provides the user with an unusually high level of information regarding process and data states.
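The cell-and-link organization just described can be sketched as simple data objects. The type and member names below (FunctionalCell, Link, activationPotential, and so on) are illustrative assumptions for explanation only, not structures taken from the specification:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: each functional node has a type and tunable
// parameters; each link has a type, a weight parameter, and endpoints.
enum class CellType { Cause, Receptor, Sensor, Cognition, MotorResponse };
enum class LinkType { Plain, Weighted, TPS, SPS };

struct FunctionalCell {
    CellType type;
    int activationPotential;  // example parameter tailoring the base operation
    std::string name;
};

struct Link {
    LinkType type;
    double weight;            // example parameter tailoring the link
    int fromCell;             // index of source cell
    int toCell;               // index of destination cell
};

struct CellMap {
    std::vector<FunctionalCell> cells;
    std::vector<Link> links;

    int addCell(CellType t, std::string n) {
        cells.push_back(FunctionalCell{t, 1, std::move(n)});
        return static_cast<int>(cells.size()) - 1;
    }
    void connect(int from, int to, LinkType t, double w) {
        links.push_back(Link{t, w, from, to});
    }
};
```

Under this sketch, a Cause-to-Receptor chain like the one in Table 1 would be built with two addCell calls and one connect call.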
The general purpose reasoning machine is especially well suited to applications of process control, though its use is certainly not so limited. Persons who are experts on a particular process may advantageously use the visually oriented development mode of the reasoning platform to model their experience and knowledge about the process. That model is then easily executed using execution objects which closely correlate to the objects used for cell map development. Visualization of the executing cell map further aids debugging and use of the cell map by providing the user with a "high bandwidth" picture of processing and data states.
These and other objects and advantages of the invention will become apparent to one of ordinary skill in the art upon consideration of the following Detailed Description and the Figures.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a computer environment which may be utilized according to one embodiment.
Figure 2 illustrates a Graphic User Interface (GUI) displaying a matrix view according to one embodiment.
Figure 3 illustrates a GUI displaying a hexagonal or paradigm view according to one embodiment.
Figure 4 illustrates a correlation map corresponding to the cell map illustrated in Figure 32 in a run random test mode according to one embodiment.
Figure 5 depicts a cell map programmed for counting active inputs in matrix view according to one embodiment.
Figure 6 illustrates a watch group display corresponding to the cell map illustrated in Figure 32 in a run random test mode according to one embodiment.
Figure 7 illustrates the relationship between the various major software classes according to one embodiment.
Figure 8 illustrates a CVisualObject class and related classes according to one embodiment.
Figure 9 illustrates the relationship between a CMode class and related classes according to one embodiment.
Figure 10 is a control flow diagram for a CNormalMode object according to one embodiment.
Figure 11 is a control flow diagram for a CDragSelectMode object according to one embodiment.
Figure 12 is a control flow diagram for a CDragMode object according to one embodiment.
Figure 13 is a control flow diagram for a CDragLinkMode object according to one embodiment.
Figure 14 illustrates assigning a link type based upon cursor position according to one embodiment.
Figure 15 illustrates the program flow for link character determination based on relative cursor position.
Figure 16 is a control flow diagram for a CAddVisualObjectMode object according to one embodiment.
Figure 17 is a control flow diagram for a CAddGroupObjectMode object according to one embodiment.
Figure 18 is a control flow diagram for a CAddColumnMode object according to one embodiment.
Figure 19 is a control flow diagram for a CDeleteColumnMode object according to one embodiment.
Figure 20 is a control flow diagram for a CAddRowMode object according to one embodiment.
Figure 21 is a control flow diagram for a CDeleteRowMode object according to one embodiment.
Figure 22 is a control flow diagram for a CRunMode object according to one embodiment.
Figure 23 illustrates an Alpha nucleate formation according to one embodiment.
Figure 24 illustrates a Beta nucleate formation according to one embodiment.
Figure 25 illustrates a Motor Response nucleate formation according to one embodiment.
Figure 26 illustrates a Gamma nucleate formation according to one embodiment.
Figure 27 illustrates a Delta nucleate formation according to one embodiment.
Figure 28 illustrates an Epsilon nucleate formation according to one embodiment.
Figure 29 illustrates a Gamma nucleate addressing scheme according to one embodiment.
Figure 30 illustrates one method of building nucleate formations.
Figure 31 is a control flow diagram for mapping a cell map in Matrix view to a cell map in a Paradigm view according to one embodiment.
Figure 32 illustrates a programmed cell map and addresses in a matrix view.
Figure 33 illustrates addresses in a Paradigm view associated with the programmed cell map shown in Figure 32.
Figure 34 illustrates the CRunAlgorithm class and related classes.
Figure 35 illustrates the program flow for one Backtrace process.
Figure 36 depicts the screen display for a backtracing run.
Figure 37 illustrates one possible process flow for the execution of a cell map.
Figure 38 depicts a complex cell map in matrix view.
Figure 39 depicts a process for running a cell map on a serial instruction computer.
Figures 40a-c depict representative flowcharts for cell map objects directed at external input and output signal processing which are not mapped into the nucleate structure.
Figures 41a-c depict representative flowcharts for cell types mapped into the nucleate structure excluding Cognition cells.
Figures 42a-g depict representative flowcharts for individual types of Cognition cells.
Figures 43a-c depict representative flowcharts for various types of Links.
Figure 44 depicts a cell map in matrix view at one point during its execution.
Figure 45 depicts a cell map in hexagonal (paradigm) view with ghost cells.
Figure 46 depicts a zoomed-in portion of a cell map in hexagonal view with ghost cells.
Figure 47 depicts a zoomed-in portion of a cell map in hexagonal view without ghost cells.
DETAILED DESCRIPTION OF THE INVENTION
The invention is methods and apparatus for a general purpose reasoning platform. In the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill will realize that the invention may be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the invention with unnecessary detail.
I. Hardware Environment and Overview
Figure 1A illustrates a computer environment which may be utilized according to the present invention. In particular, the computer environment includes: host processor 100, application software 120, an output device 110 for displaying graphical images, a character input device 112, a pointer input device 114, a cell-based reasoning single-processor hardware card (N-card) 130, and a cell-based reasoning parallel processing computer hardware card (N-computer) 140.
A. Hardware and Overview
B. Host Processor
The host computer 100 is of the personal computer variety widely known in the art. The host computer comprises a central processing unit, memory, data storage mechanisms, interface circuitry, and an operating system program, all widely known in the art and not shown individually in Figure 1A.
The cell-based reasoning application software 120 executes on the host processor and includes: a user interface and development component 122, a cell map executor component 124, an N-card interface component 128, and an N-computer interface component 126. The user interface and development component 122 provides for display, development, management, control, and execution of Richter Paradigm cell maps. The cell maps serve as the representations of cell-based reasoning programs and will be described in detail in later sections.
Once a cell map is created, it can be executed using host processor 100 CPU resources by the cell map executor component 124 of the application software 120. Optionally, a cell map may be executed using the computing resources of either an N-card 130 or an N-computer 140. In such a case the N-card interface component 128 or the N-computer interface component 126 of the application software 120 communicates bidirectionally
with the N-card or the N-computer, respectively. In such operation, the appropriate application software interface component 126 or 128 sends a digitized cell map representation, and possibly software, to the respective hardware device 140 or 130, and receives status and result information back for display on the output device 110.
C. Cell-based Card (or N-card)
The N-card 130 contains a single microprocessor 132. Software running on the microprocessor is highly optimized for cell map execution.
D. Cell-based Computer (N-computer)
The N-computer 140 contains one or more master microprocessors 142 and an array of cell-level microprocessors 144. Each of the cell-level microprocessors 146 performs the duties of one cell in the cell map and has its own memory 148. The cell-level microprocessors 146 operate in parallel and their operations are managed and coordinated by the master microprocessors 142.
It is noted that the host computer, N-card, and N-computer are representative examples of hardware devices which may be used in the execution of a cell map. Embedded devices, widely known in the art, such as a microcontrol unit (MCU) or multichip module (MCM), are examples of hardware which may be employed to execute a cell map without departing from the spirit of the invention.
II. System Architecture
Figure 1B illustrates the principal operational flow of one software embodiment. The software can be viewed in terms of its two major functional components, cell map development 122 and cell map execution 124. When first started, the software initializes in cell map development mode, displaying a set of function-specific cells available to the user via logical block 150. By and large, development mode consists of user inputs to manipulate the cells, and connections between them, to form a useful cell map. The main loop of the development component 122 thus begins with user input, as shown by logical block 154. Logical block 156 determines whether the user input targets one of the functional cell positions. If it does, logical block 158 determines whether the functional cell at the targeted position has already been established. If not, a functional cell is established by instantiating an object to represent the cell, as shown by logical block 160. Whether new or previously established, the targeted cell is modified to reflect any changes
desired by the user as shown by logical block 162. The visual display of the cell is then updated, if necessary, to reflect changes made by the user, as shown by logical block 164. Control then returns to the top of the loop via logical block 152.
If the user input did not target a functional cell, a determination is made as to whether the user input targets a link between cells, as shown by logical block 166. If it does, logical block 168 determines whether the link at the targeted position has already been established. If not, a link is established by instantiating an object to represent the link, as shown by logical block 170. Whether new or previously established, the targeted link is modified to reflect any changes desired by the user as illustrated in logical block 172. The visual display of the link is then updated, if necessary, to reflect changes made by the user, as shown in logical block 164. Control then returns to the top of the loop via logical block 152.
If the user input does not target a link, a determination is made as to whether the input targets the performance of some general program operation or whether the user input directs a transfer of control to the cell map execution component 124. This determination is represented by logical block 174. If the user input targets some general program operation, e.g., saving the cell map to disk, then the requested operation is performed, as shown by logical block 176. If the user input directs a transfer of control to the cell map execution component 124 to run the cell map, then control passes accordingly. When a run of a cell map is initiated, a determination may be made whether objects for cell map execution are already established. This determination is shown by logical block 178. In one embodiment, the objects instantiated during development, as shown by logical blocks 160 and 170, each have the capacity to perform the cell's or link's operational responsibilities at run time. Alternatively, separate objects could be used for execution. Separate objects may be used, for example, when running the cell map using an N-computer, where the execution object may be a distinct microprocessor with memory. These and other alternatives exist and may be practiced without departing from the spirit of the invention.
If the execution objects are not already established, then they are established, as shown by logical block 180. Whether or not the execution objects were previously established, the cell map executes using the by-now instantiated execution objects, as shown by logical
block 182. When cell map execution ends, control returns to the top of the development component main loop via logical block 152.
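The instantiate-on-first-touch behavior of the development loop described above (logical blocks 156 through 172 of Figure 1B) can be sketched as follows. The class and member names are illustrative assumptions, not names from the specification:

```cpp
#include <cstddef>
#include <map>
#include <memory>

// Hypothetical sketch: when user input targets a cell or link position
// for the first time, an object is instantiated to represent it; later
// input at the same position reuses the existing object, which the
// caller then modifies (the display update is handled elsewhere).
struct CellObject { int parameter = 0; };
struct LinkObject { double weight = 1.0; };

class DevelopmentLoop {
public:
    // Corresponds to logical blocks 158/160/162 for cells.
    CellObject& touchCell(int position) {
        auto& slot = cells_[position];
        if (!slot) slot = std::make_unique<CellObject>();  // first touch: instantiate
        return *slot;                                      // caller applies changes
    }
    // Corresponds to logical blocks 168/170/172 for links.
    LinkObject& touchLink(int position) {
        auto& slot = links_[position];
        if (!slot) slot = std::make_unique<LinkObject>();
        return *slot;
    }
    std::size_t cellCount() const { return cells_.size(); }

private:
    std::map<int, std::unique_ptr<CellObject>> cells_;
    std::map<int, std::unique_ptr<LinkObject>> links_;
};
```

This also illustrates why the same objects can serve at run time in one embodiment: the instantiated cell and link objects persist across the loop and are available when control transfers to execution.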
Figure 1C illustrates object-specific architecture. Via a user interface viewing mechanism 190, a user sees a functional cell object 194 useful in creating an operating cell map program. The object the user sees 194 has a direct correspondence to an available execution cell object 196 used by the execution platform 192 to run the cell map program. This direct correspondence between the visualized objects from which a user can build cell map programs, e.g., 194, and the objects provided by the execution platform to run a cell map program, e.g., 196, defines the object-specific nature useful in the practice of the invention.
This object-specific character is advantageous because it allows a seamless flow from development to execution. Furthermore, it allows a seamless flow from execution to development. This "backwards" flow is useful for visualizing cell map execution because the development component already contains a complete infrastructure for visualizing the cell map.
A. User Interface
Figure 2 illustrates a Graphical User Interface (GUI) 120 according to the present invention. GUI 120 may be displayed on output device 110 as illustrated in Figure 1A. GUI 120 consists of a variety of graphical elements. In particular, GUI 120 includes: Main menu 201, Toolbar 202, Palette map 205, Status bar 203 and Work area 204. Work area 204 is used to display a cell map 250 which includes a plurality of displayable visual objects. These displayable visual objects include cells and links which are described in detail below. Many of the functions in Main menu 201 may also be accessed through Toolbar 202 and Palette map 205.
1. Main menu
Main menu 201 allows a user to perform many of the functions of the present invention. In particular, Main menu 201 includes a plurality of menus: a) File 201a; b) Edit 201b; c) View 201c; d) Add 201d; e) Run 201e; f) Cell map 201f; g) Hardware 201g; h) Options 201h; i) Window 201i; and j) Help 201j. Most of the primary functions in Main menu 201 are listed below.
i. File menu
File menu 201a allows for opening a window in which a cell map may be created. Furthermore, File menu 201a may be used to close a cell map and/or save a cell map. File menu 201a is also used to print cell maps. Finally, a cell map may be downloaded to another device capable of executing the cell map, such as the N-card or N-computer.
ii. Edit menu
Edit menu 201b has typical editing functions used in constructing a cell map such as undo, cut, copy and paste functions.
iii. View menu
View menu 201c allows the user to determine the view of a cell map. In particular,
View menu 201c includes a function to transform a cell map shown in Matrix view as illustrated in Figure 2 to a Hexagonal view (or Paradigm view) as illustrated in Figure 3. Furthermore, the View menu 201c may transform a Hexagonal view to a Matrix view. View menu 201c also has a ghost function which toggles on and off the display of ghost or unmarked objects. Ghost objects allow a user to easily construct a cell map. For example, cell map 250 in Figure 2 displays ghost visual objects, such as ghost Cause cell 152, as well as marked visual objects, such as Cause cell 251. Cause cell 251 is marked
when a user positions a cursor over a corresponding ghost visual object and presses a mouse button. The previous ghost visual object is then replaced with a colored marked visual object. Marked visual objects are then added to an executable object list used in running a cell map. View menu 201c also provides for toggling an isolate function. When toggled on, the isolate function causes a subdued visual appearance of cell map objects not directly or indirectly connected to the object, or set of objects, selected at the time the isolate function is toggled on. This effectively highlights a selected object, or set of objects, and all objects by which it may be affected, or which it may affect, according to the current cell map configuration.
Likewise, View menu 201c enables a user to determine whether cell addresses and names are displayed. A user may also determine whether a meaningful textual identifier is displayed for links which are drawn in a discontinuous fashion to improve overall cell map visibility. Such links are called partial links and the related drawn portions point to one another by displaying arrowheads, and optionally, the text identifier.
A user may also access a Correlation map as illustrated in Figure 4 through View menu 201c. The Correlation map illustrated in Figure 4 corresponds to the programmed cell map illustrated in Figure 32. A matrix of causes and effects along with a color-coded frequency chart is shown in the Correlation map. In particular, the column containing C1, C2 and C3 corresponds to the Cause cells associated with InputA, InputB, and InputC, as shown in Figure 32, respectively. Likewise, the row of effects, E1, E2, E3 and E4, corresponds to the TPS Effect cells One, Two, Three and HitMax, as shown in Figure 32, respectively. Panels 400 to 411 generate color codes representing the frequency with which a particular cause generates an effect according to the color scale 420. For example, if every time the Cause cell associated with Input A is fired, TPS One is likewise fired, the color associated with Panel 400 would be the color associated with 100% in color scale 420. Likewise, if TPS One never fired, regardless of how many times the Cause cell associated with Input A fired, then the color associated with Panel 400 would be the color associated with 0% in color scale 420. Likewise the relationships between the other causes and effects or Cause cells and Effect cells may be displayed.
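The frequency behind each Correlation map panel can be sketched as a simple tally: for each cause-effect pair, count how often the effect fired on steps when the cause fired, and report the ratio as a percentage for color-coding. The class and member names below are illustrative assumptions, not names from the specification:

```cpp
#include <vector>

// Hypothetical tally behind one Correlation map panel: coincident[c][e]
// counts steps on which effect e fired while cause c fired, and
// causeFirings[c] counts all steps on which cause c fired.
struct CorrelationTally {
    std::vector<std::vector<int>> coincident;  // [cause][effect]
    std::vector<int> causeFirings;

    CorrelationTally(int causes, int effects)
        : coincident(causes, std::vector<int>(effects, 0)),
          causeFirings(causes, 0) {}

    // Record one step of a test run.
    void record(const std::vector<bool>& causeFired,
                const std::vector<bool>& effectFired) {
        for (std::size_t c = 0; c < causeFired.size(); ++c) {
            if (!causeFired[c]) continue;
            ++causeFirings[c];
            for (std::size_t e = 0; e < effectFired.size(); ++e)
                if (effectFired[e]) ++coincident[c][e];
        }
    }

    // Frequency as a percentage, suitable for mapping onto a 0-100% color scale.
    double percent(int c, int e) const {
        return causeFirings[c] == 0
                   ? 0.0
                   : 100.0 * coincident[c][e] / causeFirings[c];
    }
};
```

For instance, if the cause fired on two steps and the effect fired on one of them, the panel would map to 50% on the color scale.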
Figure 6 illustrates a watch graph, which may be accessed through View menu 201c, corresponding to the programmed cell map shown in Figure 32. In particular, Figure 6
illustrates the number of times the TPS Effect One cell is fired during a run random test mode of the cell map illustrated in Figure 32. A watch graph may be constructed of any cell in a cell map.
View menu 201c also enables a user to determine whether a Toolbar 207, Palette map 205, or Status bar 203 is displayed.
View menu 201c also includes a Select function which allows a user to select all at once the a) incoming paths, b) outgoing paths, c) inputs or d) outputs of the cell or cells selected at the time the View menu item is chosen.
iv. Add menu
Add menu 201d is used to add displayable visual objects or combinations of displayable visual objects to cell map 250 in Work area 204. After a user selects the appropriate cell or link type in Add menu 201d, a user then positions the cursor over a ghost object and presses a left mouse button. The ghost object is then marked and a corresponding executable cell object or link object is added to an executable object list. In particular, Add menu 201d may be used to add or mark a Cause cell 251 in cell map 250 in Work area 204. Add menu 201d may also be used to add a Receptor cell 206a in Receptor cell column 206. Likewise, a Sensor cell 207a in the Sensor cell column may be added. Cognition cells 208a, 209a, or 210a in Cognition columns 208, 209 and 210, respectively, may likewise be added. Other types of Cognition cells such as an Adder cell, 1D Table cell, 2D Table cell, Multiplier cell, Divider cell and Comparator cell may also be added. A Motor Response cell 211a in Motor Response column 211 may also be added, along with Timer cell 253. A Threshold Potential Stimulus ("TPS") Effect cell 254 and a Subthreshold Potential Stimulus ("SPS") Effect cell 255 may also be added.
An input and output group of cells may be added, as well as inserting a row of cells. An input group of cells may include a Cause, Receptor and Sensor cell with respective links. An output group of cells may include Motor Response, TPS, and SPS cells with respective links.
v. Run menu
Run menu 201e allows a user to run a cell map in either a fast, slow, step or batch mode. Run menu 201e also allows a user to run a cell map in a test Random mode and a test Exhaustive mode. In a test Random mode, a user does not have to click on a Cause cell in order to alter a firing state. In the test Random mode, firing states of Cause
cells are randomly altered. For example, in Figure 32 the firing state of the Cause cell associated with Input A may be toggled and then another random Cause cell's firing state would then be toggled, e.g., Input C. Thus, a random selection of firing states for associated Cause cells would be toggled on and off.
In a test Exhaustive mode, all the possible sequences of firing states for associated
Cause cells would be tested. For example, the Cause cell associated with Input A of Figure 32 would have a firing state toggled off and on. The Cause cell associated with Input B would then have a firing state toggled on and off. Next, the firing states for the Cause cells associated with Input A and Input B would then be toggled on and off. The firing state for the Cause cell associated with Input C would then be toggled on and off while the firing states associated with the Cause cells of Input A and Input B would not be altered. The sequence of toggling firing states on and off for associated Cause cells would then continue, similar to a binary counter, in order to test all the possible combinations of Cause cell firing states. Further, Run menu 201e allows for backtracing as described in detail below.
vi. Cell Map menu
Cell map menu 201f allows for groups of unmarked or ghost visual objects in a cell map display in Work area 204 to be added or deleted. For example, a row of unmarked cells may be inserted into or deleted from a cell map. Likewise, a column of unmarked cells may be inserted into or deleted from a cell map. When a row or column of unmarked visual objects is added, any marked objects and links would be shifted without upsetting any of the corresponding links.
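The binary-counter stepping of the test Exhaustive mode described above can be sketched as follows; the function name is an illustrative assumption, and Cause cell 0 is assumed to toggle fastest, like the least significant bit of a counter:

```cpp
#include <vector>

// Hypothetical sketch of test Exhaustive mode: enumerate all 2^n
// combinations of firing states for n Cause cells, stepping through
// them like a binary counter.
std::vector<std::vector<bool>> exhaustiveStates(int nCauses) {
    std::vector<std::vector<bool>> all;
    const unsigned total = 1u << nCauses;
    for (unsigned pattern = 0; pattern < total; ++pattern) {
        std::vector<bool> state(nCauses);
        for (int c = 0; c < nCauses; ++c)
            state[c] = (pattern >> c) & 1u;  // bit c = firing state of Cause cell c
        all.push_back(state);
    }
    return all;
}
```

For the three Cause cells of Figure 32, this yields eight firing-state combinations, from all off to all on.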
In Cell map menu 201f, a user may add a ghost Gamma nucleate, as described below, to a cell map. A debug map may be used to find and highlight all unset values, unconnected cells, or unset tables by placing them into the selected state and into a corresponding, readily identifiable visual appearance.
A cell map may also be viewed as text. Table 1, as seen below, describes, in text, the cell map illustrated in Figure 32. As can be seen in Table 1, all the inputs and outputs of the various cells along with link definitions can be described textually. Accordingly, a user can transition from a visual image or cell map of reasoning to a textual image.
TABLE 1.
Cell Map Information File: 'InputCounter2'. Information about all objects:
Cause: InputA
  No cause number set.
  0 inputs, 1 output.
  Output #1: To Receptor via Link
Receptor cell:
  1 input, 1 output.
  Input #1: From Cause 'InputA', via Link
  Output #1: To Sensor via Link
Cause: InputB
  No cause number set.
  0 inputs, 1 output.
  Output #1: To Receptor via Link
Receptor cell:
  1 input, 1 output.
  Input #1: From Cause 'InputB', via Link
  Output #1: To Sensor via Link
Cause: InputC
  No cause number set.
  0 inputs, 1 output.
  Output #1: To Receptor via Link
Receptor cell:
  1 input, 1 output.
  Input #1: From Cause 'InputC', via Link
  Output #1: To Sensor via Link
Sensor cell:
  1 input, 3 outputs.
  Input #1: From Receptor via Link
  Output #1: To CognitionCell via WeightedLink
  Output #2: To CognitionCell via WeightedLink
  Output #3: To CognitionCell via WeightedLink
Sensor cell:
  1 input, 3 outputs.
  Input #1: From Receptor via Link
  Output #1: To CognitionCell via WeightedLink
  Output #2: To CognitionCell via WeightedLink
  Output #3: To CognitionCell via WeightedLink
Sensor cell:
  1 input, 3 outputs.
  Input #1: From Receptor via Link
  Output #1: To CognitionCell via WeightedLink
  Output #2: To CognitionCell via WeightedLink
  Output #3: To CognitionCell via WeightedLink
Cognition Cell:
  Activation potential = 1.
  5 inputs, 1 output.
  Input #1: From Sensor via WeightedLink
  Input #2: From Sensor via WeightedLink
  Input #3: From Sensor via WeightedLink
  Input #4: From CognitionCell via WeightedLink
  Input #5: From CognitionCell via WeightedLink
  Output #1: To MotorResponse via TPSLink
Cognition Cell:
  Activation potential = 2.
  4 inputs, 2 outputs.
  Input #1: From Sensor via WeightedLink
  Input #2: From Sensor via WeightedLink
  Input #3: From Sensor via WeightedLink
  Input #4: From Sensor via WeightedLink
  Output #1: To CognitionCell via WeightedLink
  Output #2: To MotorResponse via TPSLink
Cognition Cell:
  Activation potential = 3.
  3 inputs, 4 outputs.
  Input #1: From Sensor via WeightedLink
  Input #2: From Sensor via WeightedLink
  Input #3: From Sensor via WeightedLink
  Output #1: To CognitionCell via WeightedLink
  Output #2: To CognitionCell via WeightedLink
  Output #3: To MotorResponse via TPSLink
  Output #4: To CognitionCell via WeightedLink
Motor Response cell:
  1 input, 1 output.
  Input #1: From CognitionCell via TPSLink
  Output #1: To TPSEffect 'One' via Link
Motor Response cell:
  1 input, 1 output.
  Input #1: From CognitionCell via TPSLink
  Output #1: To TPSEffect 'Two' via Link
Motor Response cell:
  1 input, 1 output.
  Input #1: From CognitionCell via TPSLink
  Output #1: To TPSEffect 'Three' via Link
TPS Effect: One
  No effect number set.
  1 input, 0 outputs.
  Input #1: From MotorResponse via Link
TPS Effect: Two
  No effect number set.
  1 input, 0 outputs.
  Input #1: From MotorResponse via Link
TPS Effect: Three
  No effect number set.
  1 input, 0 outputs.
  Input #1: From MotorResponse via Link
Cognition Cell:
  Activation potential = 1.
  2 inputs, 2 outputs.
  Input #1: From CognitionCell via WeightedLink
  Input #2: From CognitionCell via WeightedLink
  Output #1: To CognitionCell via WeightedLink
  Output #2: To MotorResponse via TPSLink
Motor Response cell:
  1 input, 1 output.
  Input #1: From CognitionCell via TPSLink
  Output #1: To TPSEffect 'HitMax' via Link
TPS Effect: HitMax
  No effect number set.
  1 input, 0 outputs.
  Input #1: From MotorResponse via Link
Link: From Cause 'InputA' to Receptor
Link: From Cause 'InputB' to Receptor
Link: From Cause 'InputC' to Receptor
Link: From Receptor to Sensor
Link: From Receptor to Sensor
Link: From Receptor to Sensor
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 1. From Sensor to CognitionCell
Weighted Link: Weight = 99. From CognitionCell to CognitionCell
Weighted Link: Weight = 99. From CognitionCell to CognitionCell
Weighted Link: Weight = 99. From CognitionCell to CognitionCell
TPS Link: From CognitionCell to MotorResponse
TPS Link: From CognitionCell to MotorResponse
TPS Link: From CognitionCell to MotorResponse
Link: From MotorResponse to TPSEffect 'One'
Link: From MotorResponse to TPSEffect 'Two'
Link: From MotorResponse to TPSEffect 'Three'
Weighted Link: Weight = 1. From CognitionCell to CognitionCell
Weighted Link: Weight = 1. From CognitionCell to CognitionCell
TPS Link: From CognitionCell to MotorResponse
Link: From MotorResponse to TPSEffect 'HitMax'
vii. Hardware Menu
Hardware menu 201g is used to check neurocomputer 140 as illustrated in Figure 1A. Hardware menu 201g is also used to transfer a file between the host and the neurocomputer 140.
viii. Options menu
Options menu 201h is used to set various parameters in the system, such as whether ghost objects are displayed and whether a cell map is run on a particular platform or hardware. The Options menu is also used to specify whether the cell map should be executed using integer or real number cell map operation.
ix. Window menu
Window menu 201i is used to manipulate the various windows. For example, Window menu 201i may be used to open a new window for the current view. Window menu 201i may also be used to cascade, tile, or arrange icons for the windows. Window menu 201i also has zoom functions.
x. Help menu
Help menu 201j is used to help a user with the various functions.
2. Work area
As described above, Work area 204 is used to create a Cell map. In particular, Work area 204 is used to create the various visual objects and links comprising a cell map.
Interacting with Work area 204 depends upon the current mode of the system and the particular actions taken by a user using an input device such as a mouse. The modes and the behavior of a Mode Manager object are described in detail below.
3. Palette map
Palette map 205 is a palette for adding displayable visual objects and combinations of displayable visual objects to a cell map as illustrated in Work area 204. Many of these functions may also be found in Main menu 201. Palette map 205 is primarily used for frequently used functions, as is known by one of ordinary skill in the art. For example, when a cursor is positioned over button 205a and a left mouse button is pressed, a ghost cause cell may then be marked by repositioning the cursor over the ghost cause cell and pressing the left mouse button again.
4. Toolbar
Toolbar 202 may also include many of the functions illustrated in Main menu 201. Toolbar 202 may include type, print, save, and cut functions found in Edit menu 201b. For example, when a cursor is positioned over button 202a and a left mouse button is pressed, various files may be opened. The Toolbar 202 contains unique icons which each correspond to a menu selection accessible from the Main menu 201. Toolbar structures are widely known in the art.
5. Status bar
Status Bar 203 is used to provide information to a user regarding the current state of the system as is known by one of ordinary skill in the art.
B. Cell-based Reasoning Software Application Architecture
1. Visual objects
As illustrated in Figure 2 and Figure 3, the present invention uses a plurality of displayable visual objects in displaying reasoning. The displayable visual objects used in constructing a cell map are described below.
a) Cell and link types
i. Cause cell
Cause cells, such as Cause cell 251 in Figure 2, are used to represent an input value for a cell map. A Cause cell may have an associated value and firing state assigned by a user or may obtain these values and states from alternate sources. A user sets the Cause cell value and firing state by clicking on the displayed cell image in an appropriate run mode. The clicking toggles the firing state of the cell if operating in binary/integer mode, or
brings up a dialog box for specifying the value (and implicitly the firing state, i.e., specifying zero suppresses firing) if operating in real number mode.
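The coupling of a Cause cell's value and firing state described above might be sketched as follows. This is an illustrative class, not the patent's actual implementation: the method names are assumptions chosen to mirror the two run modes, with a zero value implicitly suppressing firing in real number mode.

```cpp
#include <cassert>

// Hypothetical sketch of the Cause cell behavior: a click toggles the firing
// state in binary/integer mode, while in real number mode setting the value
// implicitly sets the firing state (zero suppresses firing).
class CauseCell {
public:
    void toggle() { firing_ = !firing_; }  // binary/integer mode click
    void setValue(double v) {              // real number mode dialog entry
        value_ = v;
        firing_ = (v != 0.0);              // zero suppresses firing
    }
    bool firing() const { return firing_; }
    double value() const { return value_; }
private:
    bool firing_ = false;
    double value_ = 0.0;
};
```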
In alternate embodiments, a cell map may not include Cause cells. Inputs may be applied directly to a Receptor cell.
ii. Receptor cell
Receptor cells, such as Receptor cell 206a, illustrated in Figure 2, are input cells which function as a buffer between a Cause cell and a Sensor cell. Receptor cells have only one input and one output. The output connects directly to one Sensor cell input. The input of a Receptor cell is generated by a Cause cell. When a Cause cell is activated, a connecting Receptor cell will fire. No activation potential is required and no weighted outputs are associated with a Receptor cell. There are three Receptor cells in an Alpha nucleate, 18 in a Beta nucleate, and 108 in a Gamma nucleate.
iii. Sensor cell
Sensor cells, such as Sensor cell 207a as illustrated in Figure 2, are input signal conditioning/processing cells. They have one input from a Receptor cell. Sensor cells have five weighted outputs and two stimulus outputs (subpotential stimulus and threshold potential stimulus). There is no activation potential associated with a Sensor cell. The weighted outputs can be output to Cognition cells and thus be incorporated into the cell-based reasoning process. The two stimulus output signals may be used to fire Motor Response or Timer cells. The weighted outputs of a Sensor cell are activated when the Sensor cell is fired. This happens directly as a result of connecting Receptor cells being activated at the input of the Sensor cell. There are three Sensor cells in an Alpha nucleate, 18 in a Beta nucleate, and 108 in a Gamma nucleate.
iv. Cognition cell
Cognition cells, such as Cognition cell 208a as illustrated in Figure 2, are the main processing cells within the cell-based reasoning process and come in a variety of types. The Basic Cognition cell, or Richter Cognition cell, performs a summation and comparison function. The only type of input to a Cognition cell is a weighted value. It should be noted that the number of weighted inputs to the Basic Cognition cell is unlimited, except as limited by available machine resources.
Cognition cells have five weighted outputs and two stimulus output links (TPS and SPS). Unlike other Cells, the Basic Cognition cells have an Activation Potential which provides a comparison datum for the function of the cells.
The Activation Potential in this embodiment is assigned by a user. The Basic Cognition cell will fire if the sum of the weighted inputs to the cell equals or exceeds the value of the cell's Activation Potential. When the Cognition cell fires, the weighted outputs are activated, as is the TPS output signal. When the Cognition cell transitions to the non-firing state, the subpotential stimulus goes active as the only active output.
v. Adder cell
Adder cells, such as Adder cell 261 as illustrated in Figure 2, are types of Cognition cells. An Adder cell performs a summation function. Weighted inputs to an Adder cell are summed and the result is output during an Adder cell firing state. The output may also be weighted or scaled by a corresponding output weight link.
vi. 1D Table cell
1D Table cells, such as 1DTable cell 262 as illustrated in Figure 2, are types of Cognition cells. A 1DTable cell performs a look-up table function. A 1DTable cell has one input and multiple outputs. A user, by way of Edit menu 201b, may assign a predetermined output value based upon an assigned weighted input value.
vii. 2D Table cell
2D Table cells, such as 2DTable cell 263 as illustrated in Figure 2, are types of Cognition cells. A 2DTable cell performs a look-up table function. A 2DTable cell has two inputs and multiple outputs. A user, by way of Edit menu 201b, may assign a predetermined output value based upon two assigned weighted input values.
viii. Multiplier cell
Multiplier cells, such as Multiplier cell 260 as illustrated in Figure 2, are types of Cognition cells. A Multiplier cell performs a multiply function. Weighted inputs to a Multiplier cell are multiplied and the product is output during a Multiplier cell firing state. The output may also be weighted or scaled by a corresponding output weight link.
ix. Divider cell
Divider cells, such as Divider cell 265 as illustrated in Figure 2, are types of Cognition cells. A Divider cell performs a division function.
Weighted inputs to a Divider cell are divided and the quotient is output during a Divider cell firing state. The output may also be weighted or scaled by a corresponding output weight link.
In one embodiment, a division by zero produces a zero result, not infinity. Because a division of zero also produces zero, a non-zero output can only be produced in such an embodiment when there are two non-zero inputs.
x. Comparator cell
Comparator cells, such as Comparator cell 264 as illustrated in Figure 2, are types of Cognition cells. The Comparator cell performs a comparison function in which the greatest input is output during firing. The output may also be weighted or scaled by a corresponding output link weight.
xi. Timer cell
Timer cells, such as Timer cell 253 as illustrated in Figure 2, are used for sequencing inputs to Motor Response cells. A Timer cell may have multiple inputs and multiple outputs to Motor Response cells. In particular, a user may enter a delay time for delaying a signal input before the signal is output from the Timer cell to a Motor Response cell. The Timer cell may also hold or latch the output of an input signal for a period of time specified by a user.
xii. Motor Response cell
Motor Response cells, such as Motor Response cell 211a, are responsible for activating an Effect cell, and are the only type of cell that can perform this function. A Motor Response cell is fired in response to stimulus link inputs and may have multiples of each stimulus input type. If an input on a threshold potential stimulus link fires a Motor Response cell, a corresponding TPS Effect cell is activated. An input on a subpotential stimulus link may also fire a Motor Response cell and activate a corresponding SPS Effect cell.
In one embodiment, the threshold potential stimulus and subpotential stimulus outputs from a Motor Response cell can never occur together, hence a Motor Response cell can activate only one Effect cell at a time, depending upon the type of stimulus input causing the Motor Response cell to fire. Alternatively, a Motor Response cell's threshold potential stimulus and subpotential stimulus outputs may fire concurrently, following the states of the respective inputs. In such a case, the Motor Response cell could activate multiple Effects at a time.
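The mutually exclusive embodiment above might be sketched as follows. The types and the rule that TPS wins when both inputs arrive in the same step are assumptions for illustration; the patent states only that the two outputs can never occur together.

```cpp
#include <cassert>

// Which Effect output a Motor Response cell activates in a given step.
enum class Stimulus { None, TPS, SPS };

// Hypothetical sketch of the exclusive embodiment: the cell fires from either
// a TPS or an SPS input and activates only the corresponding Effect cell,
// never both at once.
struct MotorResponseCell {
    Stimulus lastFired = Stimulus::None;

    // ASSUMPTION: if both stimulus inputs arrive in the same step, TPS is
    // given priority, so that the two outputs can never occur together.
    Stimulus fire(bool tpsIn, bool spsIn) {
        if (tpsIn)      lastFired = Stimulus::TPS;  // activate the TPS Effect
        else if (spsIn) lastFired = Stimulus::SPS;  // activate the SPS Effect
        else            lastFired = Stimulus::None;
        return lastFired;
    }
};
```

In the alternative embodiment described above, the two outputs would instead follow their respective inputs independently, allowing multiple Effects to be activated at once.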
Motor Response cells are present in Gamma nucleates and higher-order nucleus formations. The Alpha and Beta nucleate formations do not have Motor Response cells. Each Gamma nucleate contains 84 Motor Response cells.
xiii. TPS Effect cell
TPS Effect cells, such as TPS Effect cell 254 as illustrated in Figure 2, are Effect cells which are activated in response to an upstream threshold potential stimulus input. A TPS Effect cell has one input from a Motor Response cell. A TPS Effect cell may also output to a Cause cell. A TPS Effect cell is activated when a corresponding threshold potential stimulus signal is output from a Motor Response cell. In alternate embodiments, a cell map may not include a TPS Effect cell. Outputs may be output directly from a Motor Response cell.
xiv. SPS Effect cell
SPS Effect cells, such as SPS Effect cell 255 as illustrated in Figure 2, are Effect cells which are activated in response to an upstream subpotential stimulus input. An SPS Effect cell has one input from a Motor Response cell. An SPS Effect cell may also output to a Cause cell. An SPS Effect cell is activated when a corresponding subpotential stimulus signal is output from a Motor Response cell.
In alternate embodiments, a cell map may not include an SPS Effect cell. Output may be output directly from a Motor Response cell.
xv. Weighted links
Weighted links, such as weighted link 269, are used to connect cell inputs to cell outputs. A user may assign a weight, or scaling value, to a weighted link. The weight is multiplied by the output value from a cell and the product is output from the link to a cell input.
xvi. TPS and SPS links
TPS and SPS links, such as TPS link 270, are links used to connect cell outputs to a Motor Response or Timer cell input. These links are used to carry TPS and SPS signals or values. TPS and SPS links do not have weights.
xvii. Last Update links
Last Update links, such as Last Update link 271, are a special type of Weighted link. Cell maps are executed in "steps" described in detail below. Normally, the weighted output of a link is immediately available at the target cell input, and may be utilized within the same processing step. The weighted output of a Last Update link, however, is not available to its target cell input until the beginning of the next cell map step.
xviii. Effect-to-Cause links
Effect-to-Cause links, such as Effect-to-Cause link 272, connect a TPS or SPS Effect output to a Cause input, providing a mechanism for the cell map to feed back to itself. An Effect-to-Cause link can either set or reset the target Cause input when the originating Effect output fires, or, in real number mode, it may set the Cause value from the value it receives on its originating end.
b) Cell-based Reasoning application software classes and objects
Figure 7 illustrates the major objects and their classes of the cell-based reasoning machine application software according to the present invention. The three primary classes include the CWinAPP class 701, the CDocument class 700 and the CView class 703. Representative objects in the hierarchy of the respective classes include an Application object, a Document object and a View object. The Application object belongs to the CNrmAPP class 702 which is derived from the CWinAPP class 701. Document objects belong to the CNrmDocument class 705 which is derived from CDocument class 700. View objects belong to the CNrmView class 704 which is derived from the CView class 703. There is one Application and it represents the currently executing instance of the cell-based reasoning software itself as it runs on the host processor. The Application can manage multiple Documents at once, each acting as a repository for all of the data in a single cell map as described in detail below. Each Document may manage multiple Views, each View providing a mechanism for the user to interact with the given Document. A Document of CNrmDocument class 705 may relate to a number of Run Algorithm objects of CRunAlgorithm class 706 or its derivations 707. Objects in this class hierarchy are used to run a specified cell map. In addition, the Document object may manage and coordinate the execution platform used to run a related cell map. Alternatively, different execution platforms may be implemented by defining different classes under the CRunAlgorithm 706 base class. A Document of CNrmDocument class 705 has no more than one Mode Manager object of the CModeManager class 713. A Mode Manager object of the CModeManager class 713 relates to many objects of classes deriving from CMode class 714. These objects are
responsible for specific modes of operation. The CMode derived classes 715 are discussed in detail in relation to subsequent figures.
A Document of CNrmDocument class 705 relates to a Constraints object of CConstraintList class 711. The Constraints object is essentially a managed list of individual Constraint objects of CConstraint class 712. Each Constraint object represents a rule regarding permissible relationships between cells and links. For example, one Constraint object could specify that a TPS link may terminate and connect at a Motor Response cell input. Figure 7A depicts the cell linking rules for one embodiment, which rules, or constraints, would be represented by a plurality of members of the CConstraint class 712, each referenced by the CConstraintList class 711 object.
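The managed list of linking rules might be sketched as below. These are illustrative stand-ins, not the actual CConstraint/CConstraintList classes: each rule here simply names a link type and a cell type at which that link type may terminate, and the list answers whether a proposed connection matches some rule.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical rule: this link type may terminate at this cell type's input.
struct Constraint {
    std::string linkType;
    std::string targetCellType;
};

// Hypothetical managed list of Constraint rules.
class ConstraintList {
public:
    void add(const std::string& link, const std::string& cell) {
        rules_.push_back({link, cell});
    }
    // A connection is permissible only if some rule allows this pairing.
    bool permits(const std::string& link, const std::string& cell) const {
        for (const auto& r : rules_)
            if (r.linkType == link && r.targetCellType == cell) return true;
        return false;
    }
private:
    std::vector<Constraint> rules_;
};
```

A table of such rules, one per permitted pairing, would correspond to the linking rules depicted in Figure 7A.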
Finally, a Document of CNrmDocument class 705 relates to one List-of-cells-and-links object of CVisualObjectList class 708. This List-of-cells-and-links object is essentially the cell map and relates to multiple Displayable objects of CVisualObject class 709. A CExecutableObject class 710 is derived from the CVisualObject class 709.
c) Visual objects and classes
Figure 8 illustrates relationships between the displayable visual objects according to the present invention. A List-of-cells-and-links object in the CVisualObjectList class 708 manages the list of objects in the CVisualObject class 709. The responsibilities of objects in the CVisualObjectList class 708 include providing functions for accessing and setting elements in the list of objects in class 709. Furthermore, the objects in class 708 must know how much memory is used by and allocated to the list. An object in class 708 should also be able to nondestructively resize the list. Objects in the CVisualObjectList class 708 must also provide a set of functions for creating new objects in the CVisualObject class 709 and adding them to the list. Finally, CVisualObjectList class 708 objects must provide a set of functions for deleting objects, and any references to them, from the list.
The CVisualObject class 709 includes the objects that may be drawn in Work area 204 in creating a cell map. The CVisualObject class 709 is the root of the visual objects inheritance tree. The responsibilities of a Displayable object in the CVisualObject class 709 include drawing the visual object. The Displayable object in CVisualObject class 709 also must detect mouse inputs. Further, a Displayable object in the CVisualObject class 709 must keep track of whether or not it has been marked or is drawn in a frontmost view. An object in the CVisualObject class 709 must be able to return and set the object's name along with the object's class type. Finally, an object in the CVisualObject class 709 must be able to archive itself to or from a storage location.
Objects in the CExecutableObject class 710 are the components of a cell map that are used in running the cell map, which includes defined links and marked cells. The responsibilities of an object in the CExecutableObject class 710 include examining the firing states of the object's inputs in order to determine whether the object is firing.
CCell class 857 is a base class for all cell types. The responsibilities of an object in the CCell class 857 include knowing the object's position on a screen in both Hexagonal and Matrix views. Objects in CCell class 857 must also allow links to be attached to the object and keep track of a list of input and output links connected to the object. An object in the CCell class 857 must also test the validity of any potential link connections.
Objects in the CCause class 858 represent a Cause cell. The responsibilities of an object in the CCause class 858 include knowing an associated cause value. Objects in the CReceptor class 859 represent a Receptor cell and have inherited responsibilities.
Objects in the CSensor class 860 represent a Sensor cell and have inherited responsibilities.
Objects in the CCognition class 861 represent a Basic Cognition cell. The responsibilities of objects in the CCognition class 861 include knowing an associated activation potential value.
Objects in the CAdder class 881 represent an Adder cell. The CAdder class 881 is derived from the CBaseCognition class 861. The responsibilities of an object in the CAdder class 881 include summing its weighted inputs. Objects in the C1DTable class 882 represent a 1D Table Lookup cell. The C1DTable class 882 is derived from the CBaseCognition class 861. The responsibilities of an object in the C1DTable class 882 include knowing an assigned input value, and retrieving an output value from a one-dimensional data array using the input value as an index.
Objects in the C2DTable class 883 represent a 2D Table Lookup cell. The C2DTable class 883 is derived from the CBaseCognition class 861. The responsibilities of an object in the C2DTable class 883 include knowing two assigned, ordered input values, and
retrieving an output value from a two-dimensional data array using the input values as row and column indices.
Objects in the CMultiplier class 884 represent a Multiplier cell. The CMultiplier class 884 is derived from the CBaseCognition class 861. The responsibilities of an object in the CMultiplier class 884 include multiplying its weighted inputs.
Objects in the CDivider class 885 represent a Divider cell. The CDivider class 885 is derived from the CBaseCognition class 861. The responsibilities of an object in the CDivider class 885 include dividing its weighted inputs.
Objects in the CComparator class 886 represent a Comparator cell. The CComparator class 886 is derived from the CBaseCognition class 861. The responsibilities of an object in the CComparator class 886 include comparing its weighted inputs and outputting the greatest value found.
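The Cognition cell family described above might be sketched as follows. The class names mirror the patent's (CBaseCognition, CAdder, CMultiplier, CDivider, CComparator), but the method signatures and representation are assumptions for illustration only. The base class shows the summation-and-comparison firing rule; the Divider follows the embodiment in which division by zero yields zero.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch of the Basic Cognition cell and its derived types.
class CBaseCognition {
public:
    explicit CBaseCognition(double activationPotential = 0.0)
        : potential_(activationPotential) {}
    virtual ~CBaseCognition() = default;

    // The Basic Cognition cell fires when the sum of its weighted inputs
    // equals or exceeds the cell's Activation Potential.
    bool fires(const std::vector<double>& inputs) const {
        double sum = 0.0;
        for (double v : inputs) sum += v;
        return sum >= potential_;
    }

    // Default evaluation: summation (the Adder behavior).
    virtual double evaluate(const std::vector<double>& inputs) const {
        double sum = 0.0;
        for (double v : inputs) sum += v;
        return sum;
    }
private:
    double potential_;
};

class CAdder : public CBaseCognition {};  // evaluate() already sums

class CMultiplier : public CBaseCognition {
public:
    double evaluate(const std::vector<double>& in) const override {
        double p = 1.0;
        for (double v : in) p *= v;  // product of the weighted inputs
        return p;
    }
};

class CDivider : public CBaseCognition {
public:
    // Per the embodiment above, division by zero yields zero, not infinity.
    double evaluate(const std::vector<double>& in) const override {
        if (in.size() < 2 || in[1] == 0.0) return 0.0;
        return in[0] / in[1];
    }
};

class CComparator : public CBaseCognition {
public:
    double evaluate(const std::vector<double>& in) const override {
        return in.empty() ? 0.0 : *std::max_element(in.begin(), in.end());
    }
};
```

In the application itself, each cell's output would additionally pass through its output link, where it may be weighted or scaled as described above.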
Objects in the CMotorResponse class 865 represent a Motor Response cell and have only inherited responsibilities in this embodiment. Objects in the CTimer class 866 represent a Timer cell. A CTimer object has a responsibility of knowing when the object is triggered. Furthermore, an object in the CTimer class 866 must know the object's delay and hold time.
Likewise, objects in the CEffect class 862 represent an Effect cell. An object in this class must know an associated effect value. Objects in CTPSEffect class 863 and CSPSEffect class 864 represent a TPS Effect cell and an SPS Effect cell, respectively. An object in either one of these classes has only inherited responsibilities and each of these classes is derived from the CEffect class 862.
CLink class 853 is a link base class. The responsibility of objects in the CLink class 853 is to know which cells are connected to the object. CWeightedLink class 854 is derived from CLink class 853. Each object in CWeightedLink class 854 represents a weighted link. These objects have the responsibility of knowing an associated weight value. The objects in the CTPSLink class 855 and CSPSLink class 856 are derived from CLink class 853 and represent a TPS Link and an SPS Link, respectively. Objects in these classes have only inherited responsibilities in this embodiment. Objects in the CLastUpdateLink class 880 are also derived from the CLink class 853 and have the responsibility of knowing an associated weight value.
An object in the CPositionList class 867 manages a list of positions on various cell map views for an object in the CCell class 857 or a class derived therefrom. Objects in the CPositionList class 867 are responsible for returning world-space coordinates that correspond to the position of the displayable visual object for a particular view. The responsibilities of objects in the CPositionList class 867 include providing a function for accessing an indexed list of positions and returning world-space coordinates using the specified index.
The CHexPosition class 869 is derived from CPosition class 868. Objects in the CHexPosition class 869 are used by Sensor, Receptor, Cognition and Motor Response cells to calculate their position in a cell map Hexagonal view. Furthermore, an object in this class must provide a set function, which receives an address as an argument for a drag and drop function.
The CCartesianPosition class 870 is derived from CPosition class 868. The responsibilities of objects in the CCartesianPosition class 870 include returning the real world-space position of a displayable visual object in a Matrix view. Furthermore, the objects should provide a set function, which receives a world-space Cartesian position as an argument for a drag and drop function.
2. Mode manager
a) Classes and objects
Figure 9 illustrates the CModeManager class 713 and related CMode class 714, also illustrated in Figure 7. The Mode Manager object in CModeManager class 713 manages the switching between operational modes of the cell-based reasoning machine. An operational mode defines the application's response to user inputs, e.g., the click of a mouse button. An object in a class deriving from CMode class 714 represents a mode of operation and is a mode class object.
The Mode Manager object of the CModeManager class 713 has the responsibility of managing a stack of mode class objects. Thus the Mode Manager object is able to create, push and pop mode class objects in response to user commands. Further responsibilities of the Mode Manager object include passing mouse events from a View object to the current mode. The Mode Manager object is also able to cancel all the current modes and drop into a normal mode, as well as provide an ability to go into a specified mode.
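The mode stack behavior described above might be sketched as follows. The class shapes are illustrative, not the actual CModeManager/CMode implementations: modes are pushed and popped, the topmost mode is current, and canceling drops back to a base normal mode.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical minimal mode object; real modes would also handle mouse
// events, define the cursor, and supply a context-sensitive menu.
class CMode {
public:
    explicit CMode(std::string name) : name_(std::move(name)) {}
    virtual ~CMode() = default;
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

// Hypothetical sketch of the Mode Manager's stack of mode class objects.
class CModeManager {
public:
    CModeManager() { push("Normal"); }  // the base mode is the normal mode
    void push(const std::string& name) {
        stack_.push_back(std::make_unique<CMode>(name));
    }
    void pop() {
        if (stack_.size() > 1) stack_.pop_back();  // never pop the base mode
    }
    // Cancel all the current modes and drop into the normal mode.
    void cancelAll() {
        while (stack_.size() > 1) stack_.pop_back();
    }
    // Mouse events from a View object would be routed to this mode.
    const std::string& currentMode() const { return stack_.back()->name(); }
private:
    std::vector<std::unique_ptr<CMode>> stack_;
};
```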
The responsibilities of a Mode Class object include receiving mouse events. Specifically, the Mode Class object must receive MouseMove, LeftButtonUp, LeftButtonDown, LeftButtonDouble, RightButtonDown, RightButtonUp, and RightButtonDouble inputs or events from a mouse. Mode Class objects must also provide a function to define the mouse cursor, as well as provide a context-sensitive menu to appear on a RightButtonDown event.
i. CNormalMode
An object in the CNormalMode class 916 has the responsibility of marking visual objects when a LeftButtonDown event is received and a cursor is over a visual object or a user has selected an object from Palette map 205. In other words, a ghost visual object will be filled or a visual object will be presented with a predetermined color to signal to a user that the visual object has been marked. An object in this class also opens an attribute menu for a selected visual object when a RightButtonDown event occurs. An object in the CNormalMode class 916 also must change to a drag mode, drag link mode, or drag select mode object when appropriate.
Figure 10 illustrates a control flow diagram for a CNormalMode object 916, as illustrated in Figure 9, based on user actions according to the present invention. In particular, Logic block 916a represents the actions taken after specific mouse events generated by a user. If a right button of a mouse is pressed twice within a predetermined period of time, attribute information is displayed as illustrated in Logic block 916b.
A determination is made in Logic block 916c whether a left button of a mouse is depressed, generating a LeftMouseDown event, while a cursor in the Work area 204 is positioned over a Marked link or cell ("hit"). If a link or cell is hit, the visual object under the mouse cursor is stored as illustrated in Logic block 916k and operation reverts back to Logic block 916a. If neither a link nor a cell is hit, then Drag Select Logic block 915 is entered.
Logic block 916c determines if a visual object has been selected and a LeftMouseDown event has occurred. If a visual object is not selected, operation reverts back to Logic block 916a. Otherwise, a determination is made whether the cursor has moved a predetermined distance as illustrated in Logic block 916d. If the cursor has not moved a predetermined distance, control is passed to Logic block 916a. If the mouse has moved the requisite distance, a determination is made whether only links have been selected, as illustrated in Logic block 916e. If more than just links have been selected, then control is passed to Drag Logic block 914. Logic block 916f determines whether a single link has been selected. If a single link is not selected, then an error message is generated as illustrated by Error Message Logic block 916g. If a single link is selected, control is transferred to Drag Link Logic block 911.
Logic block 916i determines whether the "Shift" key has been depressed. If the Shift key has not been depressed, the visual objects are all deselected as illustrated in Logic block 916h. Regardless of the Shift key position, the stored item's select state is toggled as illustrated in Logic block 916j and control is transferred to Logic block 916a. The use of the Shift key in combination with a mouse click, to allow multiple objects to be in the 'selected' state concurrently, is widely known in the art.
ii. CDragSelectMode
An object in the CDragSelectMode class 915, as illustrated in Figure 9, is responsible for dragging a rectangle (marquee) which is used for selecting groups of objects.
Responsibilities of objects in class 915 include displaying a rubber-banded rectangle on the screen as a mouse moves. An object in class 915 is also responsible for toggling selection states of objects inside the rectangle in response to a LeftButtonUp event. If the Shift key is not depressed at the time of the LeftButtonUp event, any objects on the map in the selected state are toggled to the unselected state. Regardless of the Shift key position, all objects inside the rectangle are toggled to the selected state.
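By way of illustration, the marquee selection rules just described may be sketched as follows. The function and parameter names are hypothetical and not part of the described embodiment; selection state is modeled as a simple mapping from object identifier to a selected flag.

```python
# Illustrative sketch of the CDragSelectMode LeftButtonUp rules described
# above. Names are hypothetical; selection is a dict of id -> selected flag.

def complete_marquee_selection(selection, marquee_ids, shift_pressed):
    """Apply the marquee completion rules: without Shift, previously
    selected objects are first toggled to the unselected state; in all
    cases, every object inside the marquee has its state toggled."""
    if not shift_pressed:
        # Any object on the map in the selected state is deselected.
        for obj_id in selection:
            selection[obj_id] = False
    # Regardless of the Shift key position, toggle objects in the marquee.
    for obj_id in marquee_ids:
        selection[obj_id] = not selection[obj_id]
    return selection
```

For example, with object "a" already selected and objects "b" and "c" inside the marquee, a click without Shift leaves only "b" and "c" selected, while a click with Shift leaves "a" deselected by its toggle only if it was inside the marquee.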
Figure 11 illustrates a control flow diagram for a CDragSelectMode object, i.e., Drag Select Logic block 915, according to the present invention. In particular, Logic block 915a illustrates the mouse events which may trigger the transfer of control to various logic blocks in Drag Select Logic block 915. Logic block 915b determines if a user has formed a marquee around a group of displayable objects. If the marquee is not in the window, the window is scrolled as illustrated in Logic block 915c. Control is then transferred to 'track marquee' Logic block 915d. The marquee containing visual objects is tracked as illustrated by Logic block 915d and control is transferred to Logic block 915a.
A determination is made in Logic block 915e whether the Shift key has been depressed. If the Shift key has not been depressed, all the visual objects are deselected as
illustrated in Logic block 915f. Regardless of the Shift key position, the visual objects in the marquee are toggled as illustrated by Logic block 915g. Control is then transferred to the previous mode as illustrated by Previous Mode Logic block 915h. iii. CDragMode An object in CDragMode class 914, as illustrated in Figure 9, drags the currently selected group of cells. Responsibilities of objects in class 914 include displaying the cells in outline as the cursor moves around the screen. CDragMode objects also must cause the cells and connected links to be redrawn in their new positions.
Figure 12 illustrates a control flow diagram for an object in CDragMode class 914 according to the present invention. In particular, Logic block 914a represents mouse events which trigger transfers of control within Logic block 914. Upon entering, a new position of selected objects is calculated, as illustrated by Logic block 914c. An outline of selected objects is then drawn as illustrated in Logic block 914b.
A determination is made whether a cursor is in the window, as illustrated by Logic block 914d. If the cursor is not in the window, the window is scrolled, as illustrated in Logic block 914e, and control is passed to Logic block 914f. Logic block 914f calculates the new position of the selected object in Work area 204 and passes control to Logic block 914g, which erases the old outline of a selected object. An offset from the old position of the selected object is updated, as illustrated by Logic block 914h, and a new selected object outline is drawn as illustrated by Logic block 914i. Control is then transferred from Logic block 914i back to Logic block 914a.
Logic block 914j illustrates moving the selected object after a LeftMouseUp event. The view is then updated, as illustrated by Logic block 914k, and control is transferred to the previous mode, as illustrated in Logic block 914l. iv. CDragLinkMode
An object in CDragLinkMode class 911 drags a single selected link. The responsibilities for an object in the CDragLinkMode class 911 include displaying the link and an outline of the link as a cursor moves in Work area 204. Further, the object allows a user to move the start of the link, move the end of the link, or reshape the link. The object also causes a link to be redrawn into a new position and updates relevant cells of changes in connected links.
Figure 13 illustrates a control flow diagram for an object in the CDragLinkMode class 911 according to the present invention. Upon entering CDragLinkMode object 911, a determination is made whether a shape of a link is changed during dragging, as illustrated in Logic block 911b. If the shape is changed, a recalculation of link shape is made by Logic block 911e. If a shape is not being changed, the nearest appropriate cell is found in Logic block 911c. If an appropriate cell is found, the link is recalculated as illustrated by Logic block 911d. If an appropriate cell is not found, control is transferred to Logic block 911a. After recalculating a link shape as illustrated by Logic block 911e or recalculating a link as illustrated by Logic block 911d, the old link is erased as illustrated by Logic block 911f. Likewise, the new link is superimposed as illustrated in Logic block 911g and control is transferred to Logic block 911a.
Logic block 911h determines whether the shape of the link has been changed after a LeftMouseUp event. If the shape has been changed, then the view is updated as illustrated by Update View Logic block 911i. Otherwise, connections are updated as illustrated in Logic block 911j, which then transfers control to Logic block 911i. After completing Logic block 911i, control is transferred to the previous mode as illustrated in Logic block 911k. v. CConnectLinkMode An object in the CConnectLinkMode class 912 operates similarly to objects in the CDragLinkMode class 911 except that this mode changes the link type to suit the nearest cell. Responsibilities of objects in CConnectLinkMode class 912 include determining the kind of link that should be connected to the nearest cell, and further, the object should change the link type of a current link if necessary.
For example, Figure 14 illustrates cursor positions CP1 and CP2 in connecting a link from Cognition cell C1 to a Timer cell T1 and Cognition cell C2 to a Timer cell T2, respectively. In particular, Figure 14 illustrates connecting the respective cells after pressing a link button in Palette map 205. An invisible sensitized display field 1431 is co-located with each Timer cell T1, T2. The display field 1431 is divided into two zones 1441a, 1441b. As can be seen, depending upon the zone 1441 which includes the cursor position, CP1 or CP2, either a TPS Effect link or SPS Effect link is marked in response to a LeftButtonUp event. The cursor icon toggles between images indicating a TPS or SPS character depending upon the location of the cursor with respect to the zones 1441 about the Timer cell.
A similar determination of link character depending upon cursor position may be used for other cells, such as Divider cells. The cursor position may determine whether a value on a link will be either a divisor or dividend.
Figure 15 illustrates the program flow for link character determination based on relative cursor position. At initialization, a display field sensitized to cursor movements and positioning is co-located about a displayed cell and divided into zones, as shown by logical block 1502. As many zones are created as there are different link character possibilities to be determined by cursor position. After initialization, normal processing occurs until such time as the cursor enters the sensitized display area, as shown by logical block 1504. A determination is made as to which zone includes the current cursor position, as shown by logical block 1508. Depending on the zone so determined, the visible cursor image may be changed to inform the user of the currently chosen link characteristic, as shown by logical blocks 1510-1514.
Other processing then continues, as shown by logical block 1516, including the processing of cursor movements in and about the sensitized display field, which may cause reentry to the just described zone determination logic, as shown by logical block 1506. When a mouse button is finally clicked, a determination is made as to the final location of the cursor, as indicated by logical block 1518. Depending on the zone so determined, a connection is then created having the corresponding character, as shown by logical blocks 1520-1524.
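The zone determination of Figure 15 may be illustrated by the following sketch, in which a sensitized field about a cell is divided vertically into zones and the zone containing the cursor selects the link character. The function name, the vertical-only division, and the character labels are hypothetical; an embodiment may divide the field in any geometry.

```python
# Hypothetical sketch of zone-based link character determination: the zone
# of the sensitized display field containing the cursor selects the link
# character (e.g., a TPS Effect link versus an SPS Effect link).

def zone_for_cursor(cursor_y, field_top, field_bottom, zone_characters):
    """Return the link character for a cursor inside a vertically divided
    sensitized field; zone_characters lists one character per zone,
    ordered from the top of the field to the bottom."""
    if not (field_top <= cursor_y <= field_bottom):
        return None  # cursor is outside the sensitized field
    zone_height = (field_bottom - field_top) / len(zone_characters)
    index = min(int((cursor_y - field_top) / zone_height),
                len(zone_characters) - 1)
    return zone_characters[index]
```

With a field spanning vertical coordinates 0 to 10 and two zones, a cursor at y=3 falls in the first zone and a cursor at y=7 falls in the second, mirroring how cursor positions CP1 and CP2 select the TPS or SPS character in Figure 14.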
The number of logical blocks which set the cursor image, like 1510-1514, depends on the number of zones used in a given embodiment. Similarly the number of logical blocks which create a link of a certain character, like 1520-1524, depends on the number of zones used in a given embodiment. vi. CAddVisualObjectMode
An object in the CAddVisualObjectMode class 910, as illustrated in Figure 9, is responsible for marking cells and thereby adding them to a cell map. An object in class 910 is also responsible for creating links by way of putting them in a default position on the map. The object then may switch into connect link mode to change the link's position. Figure 16 illustrates a control flow diagram for a CAddVisualObjectMode object.
Logic block 910b determines a cursor shape and transfers control to Logic block 910a. When a user clicks a button on Palette map 205, the cursor changes to the shape shown on
the button. For example, when a user clicks on button 205a, the cursor changes to the shape of a miniature Cause cell.
Logic block 910c determines the type of ghost object the cursor is positioned over in a ghost mode. If ghost objects are not used, a determination of whether the cursor is in the proper area is made. A decision is then made whether the object can be created, i.e., whether there are available cell positions of that type, in Logic block 910d. If the object cannot be created, control reverts back to Logic block 910a. Otherwise, an object of the required type is created as illustrated by Logic block 910e. The object is then marked or added to the main object list as illustrated in Logic block 910f and the object is displayed as illustrated in Logic block 910g. A determination is made whether to link the object as illustrated in Logic block 910h. If the object is not linked, control is transferred back to Logic block 910a; otherwise, control is transferred to Connect Link Logic block 910i where the object is linked. vii. CAddObjectGroupMode An object in the CAddObjectGroupMode class 906, as shown in Figure 9, is responsible for adding groups of cells and links to a cell map. In particular, an object in the CAddObjectGroupMode class is responsible for adding a list of visual objects to a current document. A group of cells and links may perform a specific function and may be stored in a library. Figure 17 illustrates a control flow diagram of a CAddObjectGroupMode object in a
CAddObjectGroupMode class 906 according to the present invention. A determination is made in Logic block 906a whether a group of displayable objects has been selected. A group of objects may be selected and eventually marked. A group of cells and links may be added by selecting the cursor button in Palette menu 205, and copying and pasting the group of cells. A determination is then made whether a group of objects can be created as illustrated by Logic block 906b. If the group of objects cannot be created, an error message is generated as illustrated by Logic block 906c and control is transferred to Logic block 906a. Otherwise, objects of the type required are generated in Logic block 906d. The object is then marked or added to the main object list in Logic block 906e and the object is displayed in Logic block 906f. Control is then transferred to Logic block 906a.
The CAddInputGroupMode class 909, the CAddOutputGroupMode class 908 and the CAddLineGroupMode class 907 are all derived from the CAddObjectGroupMode class
906. In particular, the CAddInputGroupMode class 909 contains objects which add groups of input cells and links to a cell map. Objects in this class are responsible for creating an object list consisting of Cause, Receptor and Sensor cells and their connecting links, and adding them to a cell map. A group of input visual objects may be added by way of Add menu 201d or Palette map 205.
Similarly, CAddOutputGroupMode class 908 contains objects which add groups of output cells and links to a cell map. The responsibility of objects in this class includes creating an object list consisting of a Motor Response cell, TPS Effect cell and SPS Effect cell and their connecting links. A group of output visual objects may be added by way of Add menu 201d or Palette map 205.
Likewise, objects in the CAddLineGroupMode class 907, as shown in Figure 9, add a row of cells and respective links from a Cause cell to an Effect cell in a cell map. In particular, objects in this class are responsible for creating an object list consisting of one complete row of cells linked together and adding them to a cell map. A Matrix view row of visual objects may be added by way of Add menu 201d or Palette map 205. viii. CAddColumnMode Figure 18 illustrates a control flow diagram for an object in the CAddColumnMode class 902, as illustrated in Figure 9, according to the present invention. An object in class 902 is responsible for inserting a column of ghost Cognition cells into a cell map. A determination is made in Logic block 902a whether an Insert Column has been selected in Cell map menu 201f. An Insert Column selection may occur by generating a LeftMouseDown event when a cursor is positioned in Cell map menu 201f. A location of a new column is then determined responsive to a LeftMouseDown event in Work area 204 as illustrated in Logic block 902b. The column is then added to the main object list as illustrated by Logic block 902c and the display is updated in Logic block 902d. Visual objects and links to the right of the insertion point will be shifted to accommodate the added column. Control then reverts to Logic block 902a. ix. CDeleteColumnMode An object in the CDeleteColumnMode class 903, as shown in Figure 9, is responsible for deleting a Cognition cell column from a cell map. Figure 19 illustrates a control flow diagram for an object in the CDeleteColumnMode class 903 according to the present
invention. As above, a determination is made in Logic block 903a whether a Delete Column has been selected in Cell map menu 201f. A location of the column to be deleted is then determined responsive to a LeftMouseDown event as illustrated in Logic block 903b. The user may be prompted as to whether to continue if marked Cognition cells are in the column to be deleted. The column is then deleted from the main object list as illustrated by Logic block 903c and the display is updated in Logic block 903d. Visual objects and links to the right of the deletion point will be shifted to compensate for the deleted column. Control then reverts to Logic block 903a. x. CAddRowMode An object in the CAddRowMode class 904, as shown in Figure 9, is responsible for adding a ghost row of cells to a cell map. Figure 20 illustrates a control flow diagram for an object in the CAddRowMode class 904 according to the present invention. A determination is made in Logic block 904a whether an Add Row has been selected in Cell map menu 201f. A location of a new row of ghost cells is then determined in response to a LeftMouseDown event as illustrated in Logic block 904b. The row is then inserted into the main object list as illustrated by Logic block 904c and the display is updated in Logic block 904d. Existing cells and links below the insertion point will be shifted to accommodate the new row of cells. Control then reverts to Logic block 904a. xi. CDeleteRowMode
An object in the CDeleteRowMode class 905, as shown in Figure 9, is responsible for deleting a cell row from a cell map according to the present invention. Figure 21 illustrates a control flow diagram for an object in the CDeleteRowMode class 905 according to the present invention. A determination is made in Logic block 905a whether a Delete Row has been selected in Cell map menu 201f. The location of the row to be deleted is then determined in response to a LeftMouseDown event as illustrated in Logic block 905b. The user may be prompted as to whether to continue if marked cells are in the row to be deleted. The row is then deleted from the main object list as illustrated by Logic block 905c and the display is updated in Logic block 905d. Existing cells and links below the deletion point will be shifted to compensate for the deleted row. Control then reverts to Logic block 905a.
xii. CRunMode An object in the CRunMode class 913 is used when executing a cell map. Responsibilities of an object in class 913 include toggling the firing states of selected Causes, and initiating the display of firing values for selected cells and links when running in real number mode.
Figure 22 illustrates a control flow diagram for an object in CRunMode class 913 according to the present invention. On a LeftButtonUp event, logical block 913b determines whether the cursor point is located over a Cause input. If it is, then logical block 913c determines whether the cell map is executing using real number values. If it is not, then logical block 913d simply toggles the firing state of the Cause input. If it is executing real numbers, then logical block 913e queries the user for a real firing value and sets the Cause input state accordingly. If the cursor is not located over a Cause input, then logical block 913f determines whether the cursor is positioned over a cell or a link. If it is, logical block 913g determines whether the cell map is executing using real numbers. If it is, logical block 913h displays the firing value of the targeted object. When the above processing completes, control returns to logical block 913a. 3. Document A Document object in the CDocument class 700 as illustrated in Figure 7 represents an entire cell map. A user interacts with a Document object through a windows application object in the CWinApp class 701. The relationship between the Document object and the Application object in class 701 is defined by the Microsoft Foundation Class (MFC) Framework.
The responsibilities for a Document object in class 700 also include storing a list of all elements in the current cell map. The Document object provides a mechanism for saving and loading the cell map information from a storage location. The Document object provides general cut, copy and paste functions as well as providing access to the CVisualObject class 709 for selecting and adding of objects. The Document object also owns the CRunAlgorithm class 706 for running the cell map and the mode manager object in the CModeManager class 713 as illustrated in Figure 7. The Document object also provides facilities for a CVisualObjectList object to check its compatibility with a specified platform.
4. View
As illustrated in Figure 7, a View object in the CView class 703 is used in the MFC framework, which provides facilities for viewing and interacting with a Document object in class 700. A View object in CView class 703 provides message handling facilities for window events, printing facilities and updating functions. The responsibilities of the View object include storing the type of view which is currently presented, such as a Matrix or Hexagonal view. The View object also provides facilities for changing the view type. The View object controls which elements of the current cell map are shown in the view, such as whether cell names and/or addresses are shown. The View object also provides drawing functions that map between the world-space coordinate systems of the Hexagonal and
Matrix views and the window coordinates. Finally, the View object provides functions for zooming and panning the view. a) Nucleate Formation i. Alpha Nucleate Figure 23 illustrates an Alpha nucleate having seven cells according to the present invention. The center cell C1 is a Cognition cell. The cells are numbered from the center, to the top of the hexagon and then clockwise around the hexagon. The even-numbered cells in an Alpha nucleate are Sensor cells S2, S4, S6 and the odd-numbered cells are Receptor cells R3, R5, R7. ii. Beta Nucleate
Figure 24A illustrates a Beta nucleate according to the present invention. The Beta nucleate is made up of seven Alpha nucleates A1-A7 in the same hexagonal symmetry as illustrated in Figure 23. In particular, 49 cells are used as illustrated in Figure 24B. The center Alpha nucleate A1, however, is a deviation from a standard Alpha nucleate in that all seven cells in the Alpha nucleate are Cognition cells. The Alpha nucleates A2-A7 have standard Alpha symmetry. Neither the Alpha nor the Beta nucleate has Motor Response cells. iii. Motor Response nucleate Figure 25 illustrates a Motor Response nucleate according to the present invention. The Motor Response nucleate is very similar to the Alpha nucleate except that all the cells are Motor Response cells. The same hexagonal symmetry is used and the seven cells are numbered MR1 at the center and MR2-MR7 clockwise around the hexagon from the top right position.
iv. Gamma nucleate Figures 26A and 26B illustrate a Gamma nucleate according to the present invention. The Gamma nucleate is made up of seven Beta nucleates B1-B7 in a hexagonal symmetry, as shown in Figure 26A. The center Beta nucleate B1, however, is a deviation from the standard Beta nucleate in that all 49 cells of the center Beta nucleate B1 are Cognition cells. Beta nucleates B2-B7 in the Gamma nucleate illustrated in Figure 26A are standard Beta nucleates as described above. The numbering of the cells follows the convention described above. For example, B1 is the center nucleate and Beta nucleates B2-B7 are numbered clockwise from the top right of the hexagon. The Gamma nucleate also includes 12 Motor Response nucleates as illustrated in Figure 26B. These are situated as nuclei couplets MR in each corner of the Gamma nucleate. v. Delta nucleate Figs. 27A and 27B illustrate a Delta nucleate according to the present invention. The Delta nucleate, as shown in Figure 27A, utilizes 7 Gamma nucleates (G1-G7) with no deviation. The Gamma nucleates are also arranged in a hexagonal formation as are all the previous nucleates. The Gamma nucleates within a Delta nucleate are numbered Gamma nucleate G1 at the center and Gamma nucleates G2-G7 clockwise from the top right position. vi. Epsilon nucleate Figure 28 illustrates an Epsilon nucleate according to the present invention. The
Epsilon nucleate includes 7 Delta nucleates as illustrated in Figure 27 and these are numbered Delta nucleate D1 at the center and Delta nucleates D2-D7 clockwise from the top right position. vii. Automated nucleate formations Figure 30 illustrates one method of building nucleate formations. Such a method could be employed in an embodiment to dynamically create nucleate formation displays. First, a base shape is established, as shown by logical block 3010. The base shape may be specified by hardcoding, user input, or an alternative method without departing from the spirit of the invention. After the base shape is established, the current level counter is set to one, e.g., the alpha level, as shown by logical block 3012. The first order constellation pattern of cell positions is then created in accordance with the base shape, as shown by logical block 3014. For example, as described earlier, seven cells are arranged within the
hexagonal base shape to form an alpha nucleate in the described embodiment. A determination is then made whether the total number of cell positions now created is sufficient for the task at hand, as shown by logical block 3016. The number required to achieve sufficiency could be hard-coded, specified as a startup parameter, or provided by the user in response to a query. These and other alternatives may be used without departing from the spirit of the invention.
If it is determined that there are insufficient cell positions at the existing level, the level number is incremented, as shown by logical block 3018. A constellation pattern occupying that level is then created, as shown by logical block 3020. First, a constellation pattern of the previous level is positioned in the display area as an anchor. Then, more constellation patterns of the previous level are positioned about the anchor constellation to fill a field approximating the base shape. The size of the field for the current level may be hard-coded, specified as a startup parameter, programmatically determined, or provided by some alternative method without departing from the spirit of the invention. Control then passes back to logical block 3016 to determine if the current level constellation contains a sufficient number of cell positions, and the process repeats until sufficient cell positions are created.
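The level-growth loop of Figure 30 may be sketched in simplified form as follows. Because each constellation level groups seven patterns of the previous level, an alpha nucleate holds 7 cell positions, a beta 49, a gamma 343, and so on; the sketch ignores the Motor Response couplets added at the gamma level, and the function name and parameters are hypothetical.

```python
# Simplified sketch of the sufficiency loop of Figure 30: grow the
# constellation level until the cell-position count meets the requirement.

def levels_needed(required_cells, base_cells=7, grouping=7):
    """Return (level, cell_count) for the lowest constellation level
    whose cell count is sufficient for the task at hand."""
    level, cells = 1, base_cells          # level 1 corresponds to alpha
    while cells < required_cells:
        level += 1                        # logical block 3018: next level up
        cells *= grouping                 # seven previous-level patterns
    return level, cells
```

For example, a task requiring 100 cell positions resolves to level 3 (a gamma-scale constellation of 343 positions), since the alpha and beta levels supply only 7 and 49 positions respectively.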
One skilled in the art will recognize that many alternatives exist regarding the practice of the above method, some of which have been noted, others of which have not. The invention is not limited to any particular embodiment of alternatives but rather encourages such variation.
5. Addressing a Cell
The addressing of a cell in a Gamma nucleate as illustrated in Figure 29 may be done using a three-digit representation. For example a 677 address represents the sixth cell (sensor) in the seventh Alpha nucleate, in the seventh Beta nucleate in the Gamma nucleate formation illustrated in Figure 29. A 155 address represents the first cell (Cognition) in the fifth Alpha nucleate, in the fifth Beta nucleate. Likewise, the 344 address represents the third cell (Receptor) in the fourth Alpha nucleate, in the fourth Beta nucleate. Finally, a MR 372 address represents the third Motor Response cell in the Motor Response nucleate closest to the seventh Alpha nucleate in the second Beta nucleate in the Gamma nucleate formation illustrated in Figure 29.
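A sketch of a decoder for the three-digit addresses described above follows. The function name and returned structure are hypothetical; the cell-type rule combines the Alpha numbering convention (center cell Cognition, even positions Sensors, odd positions Receptors) with the deviation, described earlier, that the center Alpha of a Beta nucleate and the center Beta of a Gamma nucleate consist entirely of Cognition cells.

```python
# Hypothetical decoder for three-digit Gamma-nucleate cell addresses such
# as "677" (cell 6 in Alpha nucleate 7 of Beta nucleate 7).

def decode_address(address):
    """Split an address into (cell, alpha, beta) and name the cell type."""
    cell, alpha, beta = (int(d) for d in address)
    if cell == 1 or alpha == 1 or beta == 1:
        # Center cells, center Alpha nucleates, and the center Beta
        # nucleate hold Cognition cells.
        kind = "Cognition"
    elif cell % 2 == 0:
        kind = "Sensor"       # even-numbered positions are Sensor cells
    else:
        kind = "Receptor"     # odd-numbered positions are Receptor cells
    return {"cell": cell, "alpha": alpha, "beta": beta, "type": kind}
```

Applied to the examples above, address 677 decodes to a Sensor, 155 to a Cognition cell, and 344 to a Receptor. Motor Response addresses such as MR 372 follow a separate convention and are not handled by this sketch.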
Every Receptor, Sensor, Cognition, and Motor Response cell has a unique address determined by its position within the nucleate formations of which it is a part. a) Transforming a Matrix view to a Paradigm view Figure 31 is a control flow diagram for transforming a cell map in a Matrix view to a cell map in a Paradigm view. In particular, Figure 32 illustrates a programmed cell map in a Matrix view. For example, a Receptor cell connected to Input A has an address of 7321. Figure 33 illustrates a Paradigm view of the cell map illustrated in Figure 32. Figure 33 is a programmed cell map of a Gamma nucleate as illustrated in Figure 29.
The transformation of a Matrix view cell map to a Paradigm view cell map is initiated as illustrated by Logic block 3100 in Figure 31. The first available Beta nucleate having available Cognition cells, starting with Beta nucleate B2, is identified as illustrated in Logic block 3101. For example, four Cognition cells are illustrated in Figure 32 and must be mapped to a Beta nucleate having a sufficient number of available Cognition cells in a Gamma nucleate as illustrated in Figure 33. Alternatively, the four Cognition cells may be mapped one at a time to a Beta nucleate having an available Cognition cell position. The Cognition cells in a Matrix cell map view are then mapped to the identified Beta nucleate as illustrated by Logic block 3102. The mapping begins with C1 of the identified Beta nucleate. The closest pair of Receptor/Sensor cells to the mapped Cognition cells in the Paradigm view is then identified in Logic block 3103. The Receptor/Sensor cells are then mapped in Logic block 3104. For example, Figure 33 illustrates how Receptor cell 7321 and Sensor cell 6321 have been mapped into the Gamma nucleate. The closest Motor Response nucleate is then identified in Logic block 3105. The appropriate Motor Response cells in the identified Motor Response nucleate are obtained by identifying the closest Motor Response cells to the respective mapped Cognition cells as illustrated in Logic block 3106. The closest group of Cause cells to the mapped Cognition cells is then identified in Logic block 3107 and the Cause cells are mapped in Logic block 3108. The closest group of Effect cells to the mapped Motor Response cells is then identified in Logic block 3109 and the Effect cells are mapped in Logic block 3110. The transformation from Matrix view to Paradigm view then ends in Logic block 3111.
6. Run algorithm a) Classes and objects
Figure 34 illustrates the CRunAlgorithm class 706 and related classes. The CRunAlgorithm class 706 is an abstract class defining the interface for any algorithm used to run the cell map. Objects of this class, or of classes inheriting therefrom, are principally responsible for starting a run, stopping a run, performing a run in an incremental or continuous mode, and responding to changes to objects resulting from operations external to the run itself, e.g., a change in a Cause cell state resulting from a mouse click by the user.
The CRudimentary class 3410 inherits from the CRunAlgorithm class 706 and further provides functions needed to execute the cell map. The CRunRudimentary class 3420 and CStepRudimentary class 3440 inherit from the CRudimentary class 3410.
Classes deriving from the CRunRudimentary class 3420 each provide the functionality to execute the cell map in a continuous mode. In general, these derived classes operate continuously in the sense that once a set of inputs has been processed, the next set of inputs are immediately processed. In general, these derived classes differ from one another by the source of their inputs, i.e. Cause cell activations.
The following classes derive from CRunRudimentary. An object of CRunFast class 3424 obtains inputs from user interaction. An object of CRunTestStructured class 3426 generates its own inputs to systematically fire every possible combination of Cause cell activation patterns. An object of CRunTestAll class 3428 generates its own inputs to fire random patterns of Cause cell activations. An object of CRunBackwards class 3430 generates its own inputs and is later described in detail. An object of CRunBatch class 3422 obtains its inputs by reading a text file. b) CRunBatch Table 2 illustrates the syntax for text files which may be understood by CRunBatch objects. A line is read from the text file, processed according to its contents, and then the next line is read and processed in like fashion.
Classes deriving from the CStepRudimentary class 3440 each provide the functionality to execute the cell map in an incremental mode. An object of CRunStep class 3442 derives
from CStepRudimentary 3440 and executes the cell map once according to the current state of Cause cell states and execution then stops. An object of CRunSlow class 3444 derives from CStepRudimentary 3440 and executes the cell map in the same manner as CRunStep, waits for a predetermined period of generally less than one second, and then repeats this process. As a practical matter, this repetition makes a CRunSlow object appear to run 'continuously' but one input set is not processed immediately after the other.
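The contrast between the two incremental modes may be sketched as follows. The function names are hypothetical, and step_fn stands in for a single execution step of the cell map.

```python
# Illustrative sketch contrasting the incremental run modes described
# above: a CRunStep-style mode performs one step and stops, while a
# CRunSlow-style mode repeats single steps with a sub-second pause.
import time

def run_step(step_fn, cause_states):
    """Execute the cell map once for the current Cause cell states."""
    return step_fn(cause_states)

def run_slow(step_fn, cause_states, steps, delay_s=0.1):
    """Repeat single steps with a fixed delay between them; the run
    appears continuous although each input set is processed separately."""
    results = []
    for _ in range(steps):
        results.append(step_fn(cause_states))
        time.sleep(delay_s)
    return results
```

The design distinction from the continuous modes is that the delay (or the stop after one step) is imposed between input sets, whereas a CRunRudimentary-derived mode processes the next input set immediately.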
One of ordinary skill in the art will recognize that other CRunAlgorithm-derived classes may be created to accommodate the needs of additional run mode variations. For example, a CRunDataAcquisition class may be created which derives from CRunRudimentary 3420 to capture Cause input state data from a hardware data acquisition card, which samples and digitizes analog signals from the outside world. Such a class may permit advantageous practice of the invention in a manufacturing process control application where analog signals from temperature and pressure probes are a critical source of information. Many such alternatives exist without departing from the spirit of the invention. c) Backtracing
Backtracing is a special and valuable capability existing in the presently described embodiment. Backtracing allows a user to run the cell map backwards, moving from effect to cause, rather than from cause to effect. The backtracing capability is provided via the CRunBackwards 3430 class deriving from CRunRudimentary 3420. It is an especially valuable debugging tool during the cell map development and maintenance process, though its use is not so restricted.
Backtracing starts with the user selecting one or more Effect outputs representing a set or subset of fired output signals occurring together at the end of an execution step. A backtracing run is initiated via a main menu selection creating an object in the CRunBackwards class. The CRunBackwards object generates its own Cause inputs, systematically testing every firing combination until one is found which produces a fired output set containing every one of the outputs selected at the time the run was initiated. Any such Cause input set is captured along with the identities of all cells and links in the firing state at the end of the step. More Cause input combinations are tried until all possible combinations have been tested. The run is then stopped.
Figure 35 depicts one possible functional flowchart for the backtracing process. User selection of an Effect output pattern is shown by logical block 3510. The output pattern is stored for later comparison, as shown by logical block 3512. A first possible combination of Cause inputs is then generated, as shown by logical block 3514. A single step of the cell map is then performed using the generated Cause input pattern, as shown by logical block 3512. A determination is made whether the particular set of Cause inputs produces the desired output pattern, as shown by logical block 3514. If the Effect output pattern matches the reference set stored at logical block 3512, a Cause Set is constructed listing the identities of all cell map components in the firing state at the end of the step, as shown by logical block 3516. A next possible Cause input set is generated regardless of the outcome of the previous step, as shown by logical block 3518. A determination is made whether the set of all possible Cause input combinations has been exhausted, as shown by logical block 3520. If not, control is passed back to logical block 3512 to test the new set of Cause inputs. If all combinations have been tested, then the finished Backtrace dialogue box is displayed, as shown by logical block 3522.
A determination is made whether the user desires to terminate the dialogue box, as shown by logical block 3524. If it is determined that the user desires to quit, then the backtracing is complete and exited, as shown by logical block 3526.
A determination is made whether the user has selected a different Cause Set to display from among the Cause Sets created by earlier operations of logical block 3516. If the user switches to a different Cause Set, the cell map is reset, and all the objects contained in the Cause Set are then set to display in firing state appearance and redrawn, as shown by logical block 3530.
Figure 36 depicts the screen display for a backtracing run. When the run is initiated a Backtrace dialog box 3600 appears. The Backtrace dialog box has a status display area 3610, a Current Cause Set display area 3620, and Total Cause Set display area 3630. The Backtrace Dialog box also contains increment 3641 and decrement 3642 controls associated with the Current Cause Set display area 3620.
At the completion of a backtracing run, the status display area 3610 displays a "Finished" message. The Total Cause Set display area 3630 displays the number of Cause input sets found during the run which produced the selected Effect output pattern. The Current Cause Set display area 3620 indicates which of the discovered Cause Input sets is
currently displayed in the work area 204. The Cause Set number appearing in the Current Cause Set display area 3620 can be changed by the user by direct input or by clicking the mouse while the cursor is positioned over either the increment 3641 or decrement 3642 control. Objects in the cell map displayed in the work area 204 appear in their run mode coloring (described in detail later) according to their firing state recorded in association with the Cause Input set whose number is displayed in the Current Cause Set display area 3620.
III. Cell Map Execution
A. Internal Operation
1. Cell Map Reset
Figure 37 illustrates one possible process flow for the execution of a cell map. When an execution, or run, of the cell map starts, the entire cell map is reset as depicted in logic block 3710 so that all cell map components are in a non-firing state. Most noteworthy is the non-firing initial state of SPS-type outputs and links. An SPS signal is thus not a true complement of the related TPS signal as might be mistakenly presumed.
2. Cause Inputs
Cause inputs are not necessarily reset along with the rest of the cell map, though they may be as a convenience. The invention may be advantageously employed where Cause inputs come asynchronously from independent, outside-world sources beyond the control of the host computer. In such a case, it may be impractical or impossible to reset the Cause inputs, and the cell map can execute properly regardless.
Cause inputs are set (i.e. take on a non-zero value) or reset (i.e. take on a zero value) by events external to the cell map execution process as shown by logical block 3702. If, for example, methods of a CRunSlow class object are used to perform the cell map execution function, then Cause inputs may be set and reset by clicking a mouse button while the cursor appears over the screen image of a Cause input, where methods outside of the CRunSlow class handle the mouse events and update the Cause input object.
3. Receptor Cell Layer
A step of the cell map execution begins with all of the Receptor cells logically executing in parallel as represented by logical block 3712. This effectively provides a snapshot of a set of concurrent Cause cell states, easily described in mathematical terms as
an input vector. Each receptor cell latches the signal on its inbound link, originating at a Cause input. The latched input state is then presented to the Receptor cell's outbound link.
4. Sensor Cell Layer
After the Receptor layer is finished processing, all of the Sensor cells logically execute in parallel as represented by logical block 3714. Each Sensor cell evaluates the value presented on its inbound link, originating at a Receptor cell output. Based on the results of the evaluation, the Sensor cell determines the firing states of its outputs and presents them to any links originating at its outputs.
5. Cognition Cell Layer
After the Sensor layer is finished processing, all of the Cognition cells logically execute in parallel as represented by logical block 3716. Each Cognition cell, according to its type and programmed operating parameters, evaluates the values presented on its inbound links. Based on the results of the evaluation, the Cognition cell determines the firing states of its outputs and presents them to any links originating at its outputs. The Cognition cell also records whether any of its output states has changed.
After the Cognition cell layer has completed an evaluation cycle represented by logical block 3716, a check is made to see if any Cognition cell in the cell map changed any output state during the just completed cycle. This check is represented by logical block 3718. If it is determined that a change of state did occur, the processing of the Cognition cell layer 3716 is repeated by following logical path 3732. If it is determined that no change of state occurred during the just completed cycle, the Cognition cell layer has stabilized, and its processing is completed for the current step.
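The iterate-until-stable behavior of the Cognition layer can be sketched as follows. The names are hypothetical; each cell's `evaluate()` is assumed to return True when any of its output states changed during that evaluation.

```python
def run_cognition_layer(cognition_cells, max_cycles=1000):
    """Repeatedly evaluate every Cognition cell until no cell reports
    a changed output state, i.e. the layer has stabilized.

    Returns the number of evaluation cycles taken to stabilize.
    """
    for cycle in range(max_cycles):
        any_change = False
        for cell in cognition_cells:
            if cell.evaluate():   # True if any output state changed
                any_change = True
        if not any_change:
            return cycle + 1
    # Guard against a map that oscillates forever (not addressed in the text).
    raise RuntimeError("Cognition layer failed to stabilize")
```

The `max_cycles` guard is an added assumption; the text does not describe what happens if the layer never stabilizes.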
It is noted that individual Cognition cells, uniquely in this embodiment, have the ability to feed back into themselves. This is achieved by originating a link at one of such a cell's weighted outputs and terminating that same link into that same cell's input. In the present embodiment only Basic Cognition cells will permit such a link to be made.
An example of one such configuration appears in Figure 38. There, Basic Cognition cell 3827 may fire back into itself via Link 3870. In this configuration, the Cognition cell acts as a first-time latch, locking itself into the firing state by providing itself with a weighted input having a value at or above the cell's Action potential, once it transitions to the firing state via an input signal on Link 3865 also at or above the cell's Action Potential. In the presently described embodiment, the feedback signal on such a link is available as
an input value both in any subsequent Cognition cell layer processing 3716 in the present step, and at the beginning of the subsequent step. The operational availability of the feedback signal may be different without departing from the spirit of the invention.
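The first-time latch behavior of a self-linked Basic Cognition cell can be modeled as below. This is a hypothetical sketch: the class and attribute names are assumptions, and summing the feedback signal with the external input is one simple way to model the weighted inputs described in the text.

```python
class LatchingCognitionCell:
    """Basic Cognition cell with a weighted feedback link into itself.

    Once the external input reaches the Action Potential, the feedback
    link keeps supplying a weighted input at or above that threshold,
    locking the cell into the firing state.
    """
    def __init__(self, action_potential, feedback_weight):
        self.action_potential = action_potential
        self.feedback_weight = feedback_weight  # >= action potential to latch
        self.firing = False

    def evaluate(self, external_input):
        # The feedback signal is available only once the cell has fired.
        feedback = self.feedback_weight if self.firing else 0
        self.firing = (external_input + feedback) >= self.action_potential
        return self.firing
```

After the external input first reaches the threshold, later evaluations with zero external input still fire because the cell feeds itself.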
6. Timer Cell Layer
After the Cognition cell layer is stabilized, all of the Timer cells logically execute in parallel as represented by logical block 3720. Each Timer cell evaluates the states presented on its inbound links. Based on the results of the evaluation, the Timer cell starts, stops, resets to zero, or updates its relative time counter, and determines the firing states of its outputs in accordance with its programmed operating parameters and the value of its relative time counter.
It is noted that the updating of the relative time counter within a Timer cell may occur synchronously with the stepping of the cell map as described above, or it may occur asynchronously via an independent process.
7. Motor Response Cell Layer
After the Timer cell layer is finished processing, all of the Motor Response cells logically execute in parallel as represented by logical block 3722. Each Motor Response cell evaluates the states presented on its inbound links. Based on the results of the evaluation, the Motor Response cell determines the firing states of its outputs and presents them to any links originating at its outputs.
8. Effect Outputs
After the Motor Response cell layer is finished processing, all of the TPS and SPS Effect outputs logically execute in parallel as represented by logical block 3724. Each Effect output cell evaluates the state presented on its inbound link, originating at a Motor Response cell output. Based on the results of the evaluation, the Effect Output presents its output signal.
At the completion of the Effect output processing represented by logical block 3724, the step of the cell map is completed. In continuous modes of cell map execution, a subsequent step is then immediately initiated by following logical path 3730 to the Receptor Cell layer logical block 3712.
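The layer-by-layer step sequence of Figure 37 can be sketched as a single function. The structure is hypothetical; each layer object is assumed to expose an `execute()` method, with Cognition cells returning True when an output state changed.

```python
def step_cell_map(receptors, sensors, cognitions, timers, motors, effects):
    """One execution step: each layer processes (logically in parallel)
    only after the previous layer has finished."""
    for cell in receptors:
        cell.execute()                # latch Cause input signals
    for cell in sensors:
        cell.execute()
    # Repeat the Cognition layer until no cell changed an output state.
    while True:
        changed = [cell.execute() for cell in cognitions]
        if not any(changed):
            break
    for cell in timers:
        cell.execute()
    for cell in motors:
        cell.execute()
    for out in effects:
        out.execute()                 # present Effect output signals
```

Note that the Cognition cells are collected into a list before testing for changes, so every cell is evaluated in each cycle rather than stopping at the first changed cell.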
Effect outputs resulting from a step may direct their signals to Cause inputs via appropriate links. The linking of an Effect output to a Cause input allows the reasoning platform to carry state information forward in time from one step to the next, allowing a degree of adaptive reasoning.
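The carrying of state forward from one step to the next can be sketched as follows. The class and method names are hypothetical; the set/reset behavior follows the description of Effect-to-Cause links in the text.

```python
class EffectToCauseLink:
    """Link from an Effect output back to a Cause input.

    When the link fires it either sets or resets its target Cause
    (a single link performs only one of the two actions); when unfired
    it leaves the Cause untouched.
    """
    def __init__(self, action):           # action: "set" or "reset"
        self.action = action

    def propagate(self, effect_fired, cause_state):
        if not effect_fired:
            return cause_state            # no effect when the link is unfired
        return 1 if self.action == "set" else 0
```

At the end of a step, each such link's `propagate` would be applied to its target Cause before the next step begins, carrying the prior step's outcome into the next input pattern.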
An example of one such connection appears in Figure 38. There, the output of TPS Effect 3845 is linked to Cause input 3806. The link appears in broken form, represented by partial link 3857B at its origin end and partial link 3857A at its target end. Any such link between an Effect output and a Cause input can be programmed either to set the Cause or to reset the Cause when the link is in its fired state. In this embodiment, a single link cannot serve both to set and to reset a Cause. Such a link has no effect on the Cause when it goes into, or remains in, its non-fired state.
B. Logically Parallel Processing and Temporal Contiguity
It is noted that the process depicted in Figure 37 is idealized in the sense that it calls for parallel processing. True parallel processing is not possible on the overwhelming majority of computers in existence today, which have their foundations in the serial instruction processing design of the von Neumann architecture, widely known in the art. The speed of a von Neumann serial computer may, however, be adequate to give a reasonable approximation of parallelism.
Because the parallelism on a von Neumann processor is only an approximation, the process used to execute the cell map on a serial computer may deviate from the rigid layer- to-layer processing depicted in Figure 37 while producing the equivalent logical result at the end of each step.
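One common way to approximate a logically parallel layer on a serial machine (a general technique, not necessarily the exact process used in the embodiment) is a two-phase update: compute every cell's next state from a snapshot of the current states, then commit all of the next states at once.

```python
def parallel_layer_step(cells, inputs):
    """Emulate parallel execution of one layer on a serial machine.

    Phase 1 computes each cell's next output from the same input
    snapshot; phase 2 commits all outputs together.  The serial order
    of evaluation therefore cannot influence the result, giving the
    equivalent logical result at the end of the step.
    """
    # Phase 1: evaluate every cell against the current input snapshot.
    next_states = [cell.compute(inputs) for cell in cells]
    # Phase 2: commit all outputs at once.
    for cell, state in zip(cells, next_states):
        cell.output = state
    return next_states
```

Here each hypothetical cell exposes a `compute(inputs)` method returning its next output and an `output` attribute holding its committed state.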
Figure 39 depicts a process for running a cell map on a serial instruction computer. At the beginning of the run the cell map is reset so that every object is in its non-firing state and firing-related values are all set to zero, as shown in logical block 3902. Changes to Cause inputs are accepted to prepare the Cause input pattern desired for the next step of the cell map, as shown in logical block 3907. Logical blocks 3906 through 3918 perform a logical step of the cell map. At the end of the step, control is returned to logical block 3907 to get the Cause inputs for the next cell map step. One skilled in the art will recognize that any well known means may be employed to terminate the execution of the run processing loop.
Figures 40a-c, 41a-c, 42a-g, and 43a-c are representative flowcharts of the processing performed by individual cell map objects during execution. The processing for each object
may be chiefly performed using methods provided within the CCell 857 and CLink 853 base class inheritance structure, although the practice of the invention is not so limited.
Figures 40a-c depict representative flowcharts for cell map objects directed at external input and output signal processing which are not mapped into the nucleate structure. Figure 40a is a representative flowchart for Cause node processing for one embodiment. Figure 40b is a representative flowchart for Timer node processing for one embodiment. Figure 40c is a representative flowchart for Effect node processing for one embodiment.
Figures 41a-c depict representative flowcharts for cell types mapped into the nucleate structure excluding Cognition cells. Figure 41a is a representative flowchart for Receptor cell processing for one embodiment. Figure 41b is a representative flowchart for Sensor cell processing for one embodiment. Figure 41c is a representative flowchart for Motor Response cell processing for one embodiment.
Figures 42a-g depict representative flowcharts for individual types of Cognition cells. Figure 42a is a representative flowchart for Basic Cognition cell processing for one embodiment. Figure 42b is a representative flowchart for Adder Cognition cell processing for one embodiment. Figure 42c is a representative flowchart for Multiplier (Gate) Cognition cell processing for one embodiment. Figure 42d is a representative flowchart for Comparator Cognition cell processing for one embodiment. Figure 42e is a representative flowchart for Divider Cognition cell processing for one embodiment. Figure 42f is a representative flowchart for ID Table Lookup Cognition cell processing for one embodiment. Figure 42g is a representative flowchart for 2D Table Lookup Cognition cell processing for one embodiment.
Figures 43a-c depict representative flowcharts for various types of Links. Figure 43a is a representative flowchart for Regular, TPS, and SPS link processing for one embodiment. Figure 43b is a representative flowchart for Effect-to-Cause link processing for one embodiment. Figure 43c is a representative flowchart for Weighted link processing for one embodiment.
The processing represented by flowcharts for individual cell map objects, such as those depicted in Figures 40a through 43c, would occur in the execution process at logical block 3910.
One of ordinary skill in the art will recognize that Figure 39 depicts only one such possible process for executing a cell map using a serial instruction computer. Alternative
processes may be used without departing from the spirit of the invention. For example, processing could enter the cell map at the Cognition cell layer, working down a list of Cognition cells ordered by their input dependencies, and for each cell in the list working backwards from the cell to collect its inputs and working forward from the cell to provide its outputs. Other alternatives exist, with various advantages and disadvantages enlightened by what is well known in the art, and the use of such alternatives does not depart from the spirit of the invention.
It is noted that the desire for parallel processing on the reasoning platform (or its logical equivalent or approximation) is not just for speed, but for imbuing the platform with temporal contiguity. Temporal contiguity allows the reasoning platform to perform human-like reasoning. Temporal contiguity implies that all relevant data is immediately available when it is relevant, and that data that is not related or relevant (e.g., late data, early data) does not intrude into the evaluation and processing of the set of relevant data items. In addition to their contribution to adaptive reasoning, Last Update links and Effect-to-Cause links also may be employed in a cell map to achieve temporal contiguity.
C. Visualization
In the present embodiment, the operational state of a cell map can be visualized on the host computer's output device while the cell map is running. The visual representations of the cell map components appearing on a video monitor change their visual characteristics coincident with a change in the underlying object's state. The present embodiment utilizes the visual characteristic of color to reflect state.
A cell in its unfired state appears in the normal view color for that particular type of cell. A link in its unfired state appears light green in color regardless of its normal view color. A Cell in its firing state appears magenta (the same as the color used for selected items in development mode in the present embodiment).
A Link in its firing state appears red, except for inhibitory (negative value) weighted links, which appear dark blue when in firing state. Thus, firing excitatory links appear red, and firing inhibitory links (only weighted links can be inhibitory in this embodiment) appear dark blue. Table 3 depicts the color scheme used in the present embodiment.
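The state-to-color mapping described above can be sketched as a simple lookup. This is a hypothetical helper; the color values follow the scheme described in the text for the present embodiment.

```python
def display_color(obj_kind, firing, normal_color, inhibitory=False):
    """Return the display color for a cell map object.

    obj_kind: "cell" or "link".  normal_color is the object's normal
    view color, used for a cell in its unfired state.
    """
    if obj_kind == "cell":
        # Firing cells appear magenta; unfired cells keep their normal color.
        return "magenta" if firing else normal_color
    # Links: light green when unfired, red when firing,
    # dark blue for firing inhibitory (negative value weighted) links.
    if not firing:
        return "light green"
    return "dark blue" if inhibitory else "red"
```

A redraw routine would call such a helper whenever an object's state changes during execution.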
In the present embodiment, an executable object initiates a redraw of itself to the screen if its state changes, each and every time the object cycles during the run of the cell map. Alternatively, all objects, or all changed objects, could be redrawn at the end of each step. Yet another alternative would be to have a separate process running asynchronously to update the screen image, responding to each state change resulting from the execution process, or selecting some subset of state changes to display. These and other variations are possible without departing from the spirit of the invention.
Other color schemes are, of course, also possible without departing from the spirit of the invention. One alternative would give the user the option to define an object's state- related colors so that a user could, for example, identify all objects related in some meaningful way by a specific firing color.
Execution can be visualized using either the matrix or the hexagonal (paradigm) view in this embodiment. Figure 5 depicts a cell map in matrix view while the cell map is not running. Cause inputs 500, 520, 526 appear gray. Receptor cells 502, 522, 528 appear dark green. Sensor cells 504, 524, 530 appear light green. Cognition cells 514, 516, 532 appear red. Motor Response cells 506, 508, 534 appear blue. TPS Effect outputs 510, 512, 536 appear yellow. Regular Links 540, 544, 542, 546, 596, 562, 598 appear black. Weighted links 548-558, 570, 582-590 appear red. TPS Links 560, 592, 594 appear black.
Figure 44 depicts a cell map in matrix view at one point during its execution corresponding to the cell map in Figure 5. Figure 44 depicts the cell map at the end of a run step in which two Cause inputs 520, 526 were in firing state, and one Cause input 500 was in unfired state at the beginning of the step. Having processed in accordance with the description of cell map objects and execution processing detailed above, at the end of the step the visual representation of the cell map exhibits the following characteristics. Cells 500-512 are in the unfired state and appear in the same respective color as attributed in the discussion regarding Figure 5. Cells 520-536 are in the fired state and all appear magenta. Links 540-562 are in the firing state and all appear red. Link 570 is also in the firing state but appears blue because of its inhibitory nature. Links 580-598 are in the unfired state and appear light green.
Figure 45 depicts a cell map in hexagonal (paradigm) view with ghost cells. Figure 45 depicts the same cell map depicted in Figure 44, and in the same state, but in a different view.
Figure 46 depicts a zoomed-in portion of a cell map in hexagonal view with ghost cells. Figure 46 depicts an enlarged view of an area 4500 of the cell map view depicted in Figure 45.
Figure 47 depicts a zoomed-in portion of a cell map in hexagonal view without ghost cells. Figure 47 depicts the same view area as Figure 46 but with unused (ghost) cells removed for clarity. The cell map is in the same state as that of Figure 44; only the view has changed: Cells 502-512 are in the unfired state and appear in the same respective color as attributed in the discussion regarding Figure 5; Cells 522, 524, 528-536 are in the fired state and all appear magenta; Links 540-562 are in the firing state and all appear red; Link 570 is also in the firing state but appears blue because of its inhibitory nature; Links 580-598 are in the unfired state and appear light green.
One advantage of the present embodiment is the capability to visualize the execution and operation of a cell map. This may occur in either matrix or hexagonal (paradigm) view. Matrix view is employed advantageously for cell maps using a very small number of executable objects, or for focusing on small groups of cells during debugging. Paradigm view is advantageously employed when viewing the cell maps of reasonably sized applications.
The ability to view the operation of the cell map allows the user rapid insight not generally attainable with other technologies. An experienced user of a given cell map application may begin to recognize operational patterns and can anticipate likely outcomes before a definitive output is produced due to borderline input sets or slowly changing input patterns.
Moreover, the visual nature of the communication to the user is very high bandwidth; i.e., a tremendous amount of information can be conveyed in a very short period of time, respecting the time-honored adage that "a picture is worth a thousand words." With other technologies, internal program operation is generally hidden from the user and the application developer must specifically program for any information that is to be displayed.
As apparent from the discussion above, the present invention is advantageous because it allows a user to create a cell map and monitor its operation in an object-specific, highly visual way. For example, by utilizing the invention to develop an automated inspection device, a user can deploy cell map visualization as part of the end product so that
inspection device users may, by regular exposure, learn to see patterns of processing. This may enable the device users to recognize likely outcomes in borderline situations before a definitive determination can be made by the machine, saving time and money.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For example, the number of available inputs and outputs for a given cell type may vary from one embodiment to another. Similarly, the source of inputs to the Causes may be delivered over a telecommunications link in an embodiment. Or, the Timer cells could run as a process independent of the cell map stepping process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.