Method and system for pattern analysis using a coarse-coded neural network
Publication number: US5333210A
Authority: US
Grant status: Grant (Expired - Fee Related)
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/62—Methods or arrangements for recognition using electronic means
 G06K9/64—Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
 G06K9/66—Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
Description
The invention described herein was made by employees of the U.S. government and may be manufactured and used by or for the government without the payment of any royalties thereon or therefor.
The present invention is directed to methods and systems for pattern analysis using neural networks and, more particularly, to methods and systems for pattern analysis using neural networks having an increased-resolution input field with fewer network interconnections.
Various techniques have been applied to the problem of distinguishing among a set of patterns in a manner invariant to changes in the position, size, or angular orientation of the patterns. These techniques include statistical, symbolic, optical and neural network techniques.
The statistical, symbolic, and optical techniques are based on a two-step process of feature extraction followed by classification. For the feature extraction step, the system designer is required to specify a set of attributes capable of separating a set of training patterns into subgroups containing all distorted (i.e., translated, scaled and/or in-plane rotated) views of each distinct pattern. The system then organizes these features and uses them to classify incoming patterns.
There are at least three major disadvantages of these two-step approaches:
(1) It is not always obvious which features are sufficient for separating the set of training patterns such that all distorted views of a pattern will be classified as belonging to the same group.
(2) These approaches require a fairly large, if not exhaustive, set of training patterns to correctly organize the features such that novel views of the patterns will be correctly classified.
(3) The training time increases as the number of features and the training set size increase. Thus, these systems tend to be very slow.
A different approach to the problem of distortion invariant pattern recognition uses neural networks. Unlike the methods discussed above, in the neural network approach, the system is provided only with a set of distorted views of a set of distinct patterns (i.e., a set of translated, scaled, and/or in-plane rotated views of each distinct pattern) and, through training, learns what the relevant features are as well as how to distinguish between the distinct patterns.
Multilayer, first-order neural networks using the backward error propagation (backprop) algorithm for training have been shown to be effective for distortion invariant pattern recognition. Using this method, the neural network is provided with a large set of distorted views of a set of patterns. The neural network weights are then adjusted using the back propagation learning rule such that the neural network correctly classifies a specified percentage of the training set patterns. The major disadvantages of this system are:
(1) The training set needs to be large enough and fairly indicative of the expected distortions so that the neural network can generalize rather than memorize what features to look for.
(2) The training time increases with the size of the training set and thus these systems are also fairly slow.
Furthermore, these first-order neural networks achieve only 80% to 90% recognition accuracy.
Progress in higher-order neural networks (HONNs) has been more promising. Reid et al. (M. B. Reid, L. Spirkovska, and E. Ochoa, "Simultaneous Position, Scale, and Rotation Invariant Pattern Classification Using Third-Order Neural Networks", Int. J. of Neural Networks, 1, 1989, pp. 154-159; and M. B. Reid, L. Spirkovska, and E. Ochoa, "Rapid Training of Higher-Order Neural Networks for Invariant Pattern Recognition", Proc. of Joint Int. Conf. on Neural Networks, Wash., D.C., June 18-22, 1989, vol. 1, pp. 689-692, the disclosures of which are incorporated herein by reference in their entireties) have demonstrated that a third-order neural network is capable of achieving 100% accuracy in distinguishing between two patterns in a 9×9 pixel input field regardless of position, scale or in-plane rotation changes. The network needed to be trained on only one view of each object, and required only 10 to 20 passes to learn to distinguish between the objects in any in-plane rotational orientation, scale, or translated position. Thus, for pattern recognition, HONNs are superior to multilayered first-order backprop-trained networks in terms of training time, training set size and accuracy.
As an example, the use of a HONN for recognizing two-dimensional views of objects will first be discussed. FIG. 1A is a view of an object 20 (the space shuttle orbiter) in a two-dimensional input field 30. FIG. 1B is a view of object 20 after it has been translated across input field 30. FIG. 1C is a view of object 20 after it has been reduced in size (scaled) in input field 30. FIG. 1D is a view of object 20 after it has been rotated in-plane in input field 30. The output y_i of output node i in a general HONN is given by:
y_i = Θ(Σ_j w_ij x_j + Σ_j Σ_k w_ijk x_j x_k + Σ_j Σ_k Σ_l w_ijkl x_j x_k x_l + . . .)  (1)
where Θ(f) is a nonlinear threshold function such as, for example, the hard-limiting transfer function given by:
y_i = 1, if f > 0,  (2)
y_i = 0, otherwise;
the lower case x's are the excitation values of the input nodes; and the interconnection matrix elements, w, determine the weight that each input is given in the summation.
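For illustration, the general HONN output of equations (1) and (2) can be sketched as follows. This is a hypothetical NumPy implementation, not part of the patent; the function name `honn_output` and the representation of the weights as dense tensors are assumptions made for the sketch.

```python
import numpy as np

def honn_output(x, w1=None, w2=None, w3=None):
    """Output of a single HONN node per equation (1): a hard-limited sum of
    first-, second-, and third-order weighted products of the inputs x.
    Any order may be omitted by passing None for its weight tensor."""
    f = 0.0
    if w1 is not None:
        f += np.einsum('j,j->', w1, x)            # first-order term
    if w2 is not None:
        f += np.einsum('jk,j,k->', w2, x, x)      # second-order term
    if w3 is not None:
        f += np.einsum('jkl,j,k,l->', w3, x, x, x)  # third-order term
    return 1 if f > 0 else 0  # hard-limiting threshold, equation (2)
```

A strictly second- or third-order network, as in equations (3) and (6), simply passes only `w2` or only `w3`.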
Using information about relationships expected between the input nodes under various distortions, the interconnection weights can be constrained such that invariance to given distortions is built directly into the network architecture. See Giles et al. (G. L. Giles and T. Maxwell, "Learning, Invariances, and Generalization in High-Order Neural Networks", Applied Optics, 26, 1987, pp. 4972-4978; and G. L. Giles, R. D. Griffin and T. Maxwell, "Encoding Geometric Invariances in Higher-Order Neural Networks", Neural Information Processing Systems, American Institute of Physics Conference Proceedings, 1988, pp. 301-309, the disclosures of which are incorporated herein by reference in their entireties) for a discussion of building invariance into HONNs.
As an example, in a second-order neural network 40 as illustrated in FIG. 2, the inputs (x_1 through x_4) are first combined in pairs at product points 42 (denoted by an X) to determine intermediate values, the intermediate values are weighted and summed at summation point 44, and the output from output node y_i is then determined by applying the threshold function to the weighted sum determined at summation point 44. In accordance with equation (1) above, the output for a strictly second-order network is given by the function:
y_i = Θ(Σ_j Σ_k w_ijk x_j x_k).  (3)
The invariances achieved using this architecture depend on the constraints placed on the weights.
In an example, each pair of input pixels combined in a second-order network defines a line with a certain slope. As shown in FIGS. 3A and 3B, when an object 21 is moved (translated) or scaled in an input field 30, the two points in the same relative positions within the object still form the end points of a line having the same slope. Thus, provided that all pairs of points which define the same slope are connected to the output node using the same weight, the network will be invariant to distortions in scale and translation. In particular, for two pairs of pixels (j, k) and (l, m), with coordinates (x_j, y_j), (x_k, y_k), (x_l, y_l), and (x_m, y_m) respectively, the weights are constrained according to:
w_ijk = w_ilm, if (y_k - y_j)/(x_k - x_j) = (y_m - y_l)/(x_m - x_l).  (4)
Alternatively, the pair of points combined in a second-order network may define a distance. As shown in FIGS. 4A and 4B, when an object 22 is moved (translated) across input field 30 or rotated within a plane, the distance between a pair of points in the same relative positions on the object does not change. Thus, as long as all pairs of points which are separated by equal distances are connected to the output with the same weight, the network will be invariant to translation and in-plane rotation distortions. The weights for this set of invariances are constrained according to:
w_ijk = w_ilm, if d_jk = d_lm.  (5)
That is, the magnitude of the vector defined by pixels j and k (d_{jk}) is equal to the magnitude of the vector defined by pixels l and m (d_{lm}).
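As an illustrative sketch of the constraint in equation (5), pixel pairs can be grouped into equivalence classes by inter-pixel distance, with every pair in a class sharing one weight. The helper below is hypothetical (the name `distance_classes` is not from the patent); it keys classes by squared distance so the keys stay exact integers.

```python
import itertools

def distance_classes(n):
    """Group all pixel pairs of an n x n input field by inter-pixel
    distance, per equation (5): pairs with equal distance share a weight.
    Returns {squared_distance: [((x1, y1), (x2, y2)), ...]}."""
    classes = {}
    pixels = [(x, y) for x in range(n) for y in range(n)]
    for j, k in itertools.combinations(pixels, 2):
        d2 = (j[0] - k[0]) ** 2 + (j[1] - k[1]) ** 2  # squared distance
        classes.setdefault(d2, []).append((j, k))
    return classes
```

For a 2×2 field, for instance, the six pixel pairs fall into just two classes (the four unit-length sides and the two diagonals), so only two independent second-order weights are needed.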
Thus, when invariance to translation and scale (without invariance to rotation) or to translation and rotation (without invariance to scale) is desired, a second-order neural network is appropriate.
To achieve invariance to translation, scale, and in-plane rotation simultaneously, a third-order neural network 60, as shown in FIG. 5, can be used. The third-order neural network 60 illustrated in FIG. 5 includes input nodes x_1 through x_4, connected in triplets to product points 62 (which are similar to product points 42 in the second-order neural network of FIG. 2 except that the excitation values of three input nodes are multiplied thereat), where intermediate values are determined. The intermediate values determined at product points 62 are weighted and summed at summation point 64, and the summation is supplied to a single output node y_i.
The output for the strictly third-order neural network shown in FIG. 5, in accordance with equation (1), is given by the function:
y_i = Θ(Σ_j Σ_k Σ_l w_ijkl x_j x_k x_l).  (6)
That is, when the input field 30 is a matrix of pixels, as is commonly used for object recognition, all sets of input pixel triplets in object 24 are used to form triangles having included angles (α, β, γ). As shown in FIGS. 6A and 6B, when object 24 is translated, scaled, or rotated in-plane, the three points in the same relative positions on the object 24 still form the included angles (α, β, γ). In order to achieve invariance to all three distortions, all sets of triplets forming similar triangles are connected to the output node of the neural network with the same weight. That is, the weight for the triplet of inputs (j, k, l) is constrained to be a function of the associated included angles (α, β, γ) such that all elements of the alternating group on three elements are equal:
w_ijkl = w_(i,α,β,γ) = w_(i,β,γ,α) = w_(i,γ,α,β).  (7)
Note that the order of the angles matters, but not which angle is measured first.
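A sketch of the weight-sharing key used in equation (7): the included angles of the triangle formed by a triplet of pixels can be computed with the law of cosines, and they are unchanged when the triplet is translated, scaled, or rotated. The function name and coordinate convention below are illustrative assumptions, not taken from the patent.

```python
import math

def included_angles(p, q, r):
    """Included angles (alpha, beta, gamma) of the triangle formed by
    three pixel coordinates, via the law of cosines.  Triplets with the
    same angles share one weight under equation (7)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # side lengths opposite vertices p, q, r respectively
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
    beta = math.acos((a * a + c * c - b * b) / (2 * a * c))
    gamma = math.pi - alpha - beta
    return (alpha, beta, gamma)
```

Scaling a triplet by any factor leaves the returned angles unchanged, which is exactly why the constraint of equation (7) yields scale invariance.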
Because HONNs are capable of providing nonlinear separation using only a single layer, once invariances are incorporated into the architecture, the neural network can be trained (i.e., values assigned to the weights) using a simple rule of the form:
Δw_ijk = (t_i - y_i) x_j x_k,  (8)
for a secondorder neural network, or
Δw_ijkl = (t_i - y_i) x_j x_k x_l,  (9)
for a third-order neural network, where the expected training output, t, the actual output, y, and the inputs, x, are all binary. Prior to training, the weights, w, can be set to zero or to random values.
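The training rule of equation (9) can be sketched as a perceptron-style update over a third-order weight tensor. This is a hypothetical sketch that, for brevity, omits the weight-sharing constraints of equation (7), so it learns an unconstrained (non-invariant) third-order network; the function name and epoch loop are assumptions.

```python
import numpy as np

def train_third_order(patterns, targets, w, epochs=100):
    """Train a single-output third-order network with the rule of
    equation (9): after each misclassified binary pattern x, change every
    weight by (t - y) x_j x_k x_l.  `w` is an n x n x n weight tensor."""
    for _ in range(epochs):
        converged = True
        for x, t in zip(patterns, targets):
            f = np.einsum('jkl,j,k,l->', w, x, x, x)
            y = 1 if f > 0 else 0          # hard-limiting threshold
            if y != t:
                converged = False
                w += (t - y) * np.einsum('j,k,l->jkl', x, x, x)
        if converged:                       # all patterns classified
            break
    return w
```

Starting from all-zero weights, two distinct binary patterns with opposite targets are typically separated within a couple of passes, consistent with the small number of training passes reported for HONNs above.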
Second- and third-order neural networks as described above are disclosed in the above-incorporated references of Reid et al.
The main advantage of building invariance to geometric distortions directly into the architecture of the HONN is that the network is forced to treat all distorted views of an object as the same object. Distortion invariance is achieved before any input vectors (training patterns) are presented to the network. Thus, the network needs to learn to distinguish between just one view of each object, not numerous distorted views of each object.
While building invariances into the network greatly reduces the number of independent weights which must be learned, some storage must still be used to associate each triplet of inputs to a set of included angles.
A disadvantage of HONNs is that as their order and the number of input nodes increase, the number of interconnections required (i.e., interconnections between the input nodes x_1 through x_n and the product points 42 or 62) becomes excessive. For example, a network with M inputs and one output using rth-order terms requires M-choose-r interconnections. For higher orders, this number, which is on the order of M^r, is clearly excessive.
In the field of two-dimensional object recognition, for example, wherein an N×N pixel input field is used, combinations of three pixels (i.e., in a third-order neural network) can be chosen in N^2-choose-3 ways. Thus, for a 9×9 pixel input field, the number of possible triplet combinations (for a third-order neural network) is 81-choose-3, or 85,320. Increasing the resolution to 128×128 pixels increases the number of possible interconnections to 128^2-choose-3, or about 7.3×10^11, a number too great to store on most machines. For example, on a Sun 3/60 with 30 MB of swap space, a maximum of 5.6 million (integer) interconnections can be stored, limiting the input field size for fully connected third-order neural networks to about 18×18 pixels. Furthermore, the number of interconnections required to fully connect a 128×128 pixel input field (about 10^12) is far too large to allow a parallel implementation in any hardware technology that will be commonly available in the foreseeable future.
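The interconnection counts quoted above follow directly from the binomial coefficient; a quick check in Python (the helper name is illustrative):

```python
from math import comb

def interconnections(n_inputs, order):
    """Number of weights needed to fully connect a HONN of the given
    order to one output node: n_inputs choose order."""
    return comb(n_inputs, order)

# Third-order counts quoted in the text:
print(interconnections(9 * 9, 3))      # 9x9 field: 85,320
print(interconnections(128 * 128, 3))  # 128x128 field: about 7.3e11
```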
Spirkovska et al. (L. Spirkovska and M. B. Reid, "Connectivity Strategies for Higher-Order Neural Networks Applied to Pattern Recognition", Int. Joint Conf. on Neural Networks, June 1990, Vol. 1, pp. 21-26, the disclosure of which is incorporated herein by reference in its entirety) discuss techniques for reducing the number of interconnections in a HONN so that the number of input nodes can be increased. In particular, regional connectivity was evaluated, in which triplets of pixels are connected to the output node only if the distances between all of the pixels comprising the triplet fall within a set of preselected regions. Using this strategy, the input field size was increased to 64×64 while still retaining many of the advantages shown previously, such as a small number of training passes, training on only one view of each object, and successful recognition invariant to in-plane rotation and translation.
However, using regional connectivity, images invariant to changes in scale could not be recognized. Also, as the input field size increased, the amount of time for each pass on a sequential machine increased dramatically. The 64×64 pixel input field network required on the order of days on a Sun 3/60 to learn to distinguish between two objects. This is despite the fact that the number of interconnections was greatly reduced from the fully connected version. The number of logical comparisons required to determine whether the distances between pixels fall within the preselected regions was still huge.
An object of the present invention is to provide methods and systems for pattern analysis using neural networks having high resolution input fields.
Another object of the present invention is to reduce the number of interconnections required in a neural network having a high resolution input field.
To achieve the foregoing and other objects and advantages, and to overcome the shortcomings discussed above, a pattern analysis system and method are provided which use a neural network to analyze a coarse-coded pattern, the pattern to be analyzed being coarse-coded so as to form a plurality of subpatterns represented as respective sets of subpattern data. The plurality of subpatterns are formed by overlaying a plurality of offset, overlapping coarse-code fields, comprised of coarse-code units having a predetermined size, over the pattern, so as to represent an input pattern comprised of a matrix of units greater in number and smaller in size than the units in the coarse-code fields. Input values are assigned to each of the coarse-code units in the coarse-code fields in accordance with the input pattern over which the coarse-code fields are overlaid. That is, for example, a unit is turned ON if it overlies part of the pattern; otherwise the unit remains OFF.
The neural network includes a plurality of fields, equal in number to the plurality of subpatterns, so that each field corresponds to one of the subpatterns. Each field includes a plurality of input nodes, and at least one summation point where weighted products of predetermined combinations of the input nodes are summed so as to determine a subpattern value for each summation point. The neural network also includes at least one output node, coupled to corresponding summation points from a plurality of the fields, which performs a threshold function on a pattern value received at that output node to provide an output signal.
The input nodes of each field receive the subpattern data of the subpattern which corresponds to that field. Each field of the neural network then sums the weighted products of the predetermined combinations of its input nodes to determine a subpattern value at each summation point for each field. The subpattern values at the corresponding summation points from a plurality of fields are then summed to produce a pattern value, which is supplied to the output node coupled to these corresponding summation points. The output node then performs its threshold function on the pattern value received thereat to produce an output signal. The output signal from the output node is used to classify the pattern.
When the pattern analysis system and method are used to recognize patterns, the system is first trained by supplying subpatterns from one or more coarse-coded training patterns to the input nodes of the neural network for subsequent evaluation by the neural network. The values of the weights for the products of predetermined combinations of input nodes are then assigned so that a unique output signal will be produced by the output node(s) of the neural network for each training pattern. A pattern to be tested is then coarse-coded, and the subpatterns representative of the test pattern are received by the input nodes of respective fields of the trained neural network. Based on the output signal(s) produced for the test pattern, a determination can be made as to which of the plurality of training patterns corresponds to the test pattern.
The present invention is particularly useful with HONNs in that the number of input nodes in each field of the neural network is equal to the number of units in each corresponding subpattern. Accordingly, since the number of neural network interconnections is related to the number of input nodes in each field, which number is much less than the total number of units in the high-resolution input pattern formed by the plurality of offset, overlapping coarse-code fields, data representative of the high-resolution input pattern is provided without the combinatorial explosion of interconnections that would be associated with the high-resolution input pattern without coarse coding. That is, a large input field is broken into a plurality of smaller fields, each of which can be analyzed by the neural network.
The invention will be described in detail with reference to the following drawings in which like reference numerals refer to like elements and wherein:
FIGS. 1A-1D are views of an object and distortions of the object in an input field;
FIG. 2 illustrates a second-order neural network;
FIGS. 3A and 3B are views of an object and of a translated, scaled view of the object;
FIGS. 4A and 4B are views of an object and of a translated, rotated view of the object;
FIG. 5 illustrates a third-order neural network;
FIGS. 6A and 6B are views of an object and of a translated, scaled, rotated view of the object;
FIG. 7A illustrates an input field containing two ON pixels;
FIG. 7B illustrates two coarse-code fields which are offset and overlaid to form the higher resolution input field of FIG. 7A;
FIG. 8A illustrates an 8×8 input field containing a pattern in the shape of a T;
FIG. 8B illustrates two 4×4 coarse-code fields which can be used to coarse-code the 8×8 field of FIG. 8A;
FIG. 9A illustrates an 8×8 input field containing a pattern in the shape of a C;
FIG. 9B illustrates four 2×2 coarse-code fields which can be used to coarse-code the 8×8 input field of FIG. 9A;
FIG. 10 illustrates the lower resolution subpatterns formed when the coarse-code fields of FIG. 8B are used to coarse-code the patterns illustrated in FIGS. 8A and 9A;
FIG. 11 illustrates a third-order neural network having two fields and a single output node which can be used to analyze a coarse-coded pattern in accordance with the present invention;
FIG. 12 is a block diagram of an automated tool selection system to which the present invention can be applied;
FIG. 13 is a flowchart illustrating a training procedure for use with a HONN according to the present invention;
FIG. 14 is a flowchart illustrating a testing procedure for use with a HONN according to the present invention; and
FIGS. 15A and 15B are patterns of aircraft which can be recognized using a HONN in accordance with the present invention.
The references to Reid et al., Spirkovska et al. and Giles et al., discussed above, are incorporated herein by reference. These references disclose neural networks, including HONNs of the second and third order, which can be used (with modifications to be discussed below) in the present invention. Although the present invention is particularly suited for HONNs because it is in HONNs that the explosion of interconnections is most extreme, the present invention has use in other neural networks, and especially in neural networks where the number of input nodes and network interconnections are such that the memory of the hardware used therewith becomes taxed. Accordingly, while specific examples involving HONNs will be discussed, these examples are not meant to be limiting.
As used herein, the terminology "subpattern data" refers to data (usually binary in form) which is organized in sets, such as, for example, matrices. The sets of subpattern data can be square, or can have different dimensions in the x, y (and possibly z) directions. Additionally, while a Cartesian coordinate system is used in the examples, it is also known, and thus possible, to use a polar coordinate system to define patterns for use by neural networks.
Pattern data can be used to represent objects, characters and other visible items (in which case the pattern data is also referred to as "pixel data"), and further can comprise non-visible items such as, for example, voice data, or other information.
As is known, neural networks can be used to perform a variety of different types of analyses on pattern data. One type of analysis, described in the above references and in the following description, is pattern recognition. Other types of analysis include, for example, classification of pattern data and determining relationships between sets of pattern data.
An example of the manner in which the present invention can be applied to image patterns will now be described.
In accordance with this illustrative use of the present invention, an image pattern is coarse-coded to form a plurality of subpatterns represented as sets of subpattern data (pixel data), and then each set of subpattern data is supplied to a corresponding field of the neural network. The output node(s) of the neural network then perform(s) a threshold function, such as, for example, the hard-limiting transfer function described above in equation (2), on a summation of the values determined for all fields in the network, instead of on each field individually.
Coarse coding of the pattern results in a plurality of sets of subpattern data representing subpatterns of the original pattern, each subpattern having a resolution less than that of the pattern represented by all of said subpatterns combined. Accordingly, a neural network having small fields (optimally, all having the same architecture) can be used to receive the subpattern data from each subpattern. As a result, the number of interconnections is reduced even when the pattern represented by all of the subpattern data has a high resolution.
The coarse coding procedure used in the present invention involves overlaying fields (coarse-code fields) of coarser units (in this image recognition example the units correspond to pixels) in order to represent an input field comprised of smaller pixels, as shown in FIGS. 7A and 7B. FIG. 7A shows an input field 50 of size 10×10 pixels. FIG. 7B shows two offset but overlapping coarse-code fields 52, 54, each of size 5×5 coarse pixels. In this case, each coarse-code field 52, 54 is comprised of pixels which are twice as large (in both dimensions) as in FIG. 7A. To reference an input pixel using the two coarse-code fields requires two sets of coordinates. For example, pixel (x=7, y=6) on the original image of FIG. 7A would be referenced as the set of coarse pixels ((x=D, y=C) & (x=III, y=III)) in FIG. 7B, assuming a coordinate system of (A, B, C, D, E) for coarse-code field 52 and (I, II, III, IV, V) for coarse-code field 54. This is a one-to-one transformation. That is, each pixel on the original image can be represented by a unique set of coarse pixels.
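The two-coordinate referencing described above can be sketched as follows, using 0-based indices (so the labels A-E and I-V become 0-4). The function name and the convention that field i is offset by i fine pixels with wrap-around are assumptions chosen to reproduce the FIG. 7 example; they are not dictated by the patent.

```python
def coarse_coords(x, y, ifs=10, n_fields=2):
    """Map a 0-based fine-pixel coordinate in an ifs x ifs input field to
    its coarse-pixel coordinate in each of n_fields offset, wrap-around
    coarse-code fields.  Field i is shifted by i fine pixels; each coarse
    pixel spans n_fields fine pixels per dimension, so each field is
    (ifs // n_fields) coarse pixels on a side."""
    return [(((x - i) % ifs) // n_fields,
             ((y - i) % ifs) // n_fields) for i in range(n_fields)]

# 1-based pixel (7, 6) is 0-based (6, 5):
print(coarse_coords(6, 5))  # [(3, 2), (2, 2)] == (D, C) & (III, III)
```

The mapping is one-to-one: distinct fine pixels always yield distinct sets of coarse coordinates.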
The above transformation of an image (pattern) to a set of smaller images (subpatterns) can be used to greatly increase the resolution possible in a neural network, especially in a HONN. For example, a fully connected third-order neural network for a 10×10 pixel input field requires 10^2-choose-3, or 161,700, interconnections. Using two fields of 5×5 coarse pixels requires just 5^2-choose-3, or 2,300, interconnections, accessed once for each coarse-code field. The number of required interconnections is reduced by a factor of about 70. For a larger input field, the savings are even greater. For example, with a 100×100 pixel input field, a fully connected third-order neural network requires 1.6×10^11 interconnections. If this field is represented as 10 fields of 10×10 coarse pixels, only 161,700 interconnections are necessary. The number of interconnections is decreased by a factor of about 100,000.
One aspect of coarse coding which needs to be addressed is how the part of the image which is not intersected by all coarse-code fields is handled. That is, how is pixel (1, 5) in the original image shown in FIG. 7A represented using the two coarse-code fields 52, 54 in FIG. 7B? There are at least two ways to implement coarse coding: (1) with wrap-around; or (2) by using only the intersection of the fields. If coarse coding is implemented using wrap-around, pixel (1, 5) could be represented as the set of coarse pixels ((A, C) & (V, II)). Alternatively, if coarse coding is implemented as the intersection of the coarse-code fields, the two coarse-code fields 52, 54 shown in FIG. 7B would be able to uniquely describe an input field of 9×9 pixels, not 10×10.
Using wrap-around, the relationship between the number of coarse-code fields (n), the input field size (IFS), and the coarse-code field size (CFS) in each dimension is given by:
IFS = CFS * n.  (10)
On the other hand, using the intersection-of-fields implementation, the relationship between the number of coarse-code fields, the input field size, and the coarse-code field size in each dimension is given by:
IFS = (CFS * n) - (n - 1).  (11)
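Equations (10) and (11) can be captured directly; for instance (with hypothetical helper names):

```python
def ifs_wrap(cfs, n):
    """Equation (10): input field size with wrap-around coarse coding."""
    return cfs * n

def ifs_intersect(cfs, n):
    """Equation (11): input field size using only the field intersection."""
    return cfs * n - (n - 1)

# The FIG. 7B example: two fields of 5x5 coarse pixels.
print(ifs_wrap(5, 2))       # 10 (a 10x10 fine input field)
print(ifs_intersect(5, 2))  # 9  (only a 9x9 field is uniquely described)
```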
For small n, the resulting input field size, IFS, is not significantly different between the two implementations. As discussed above, coding an image as a set of coarser images greatly increases the size of the input field possible in, for example, a higher-order neural network.
As an example of how coarse coding can be applied to HONNs, refer to FIGS. 8A-11. In order to train a neural network to distinguish between a "T" and a "C" in an 8×8 pixel input field 65, the network could be trained on the two images shown in FIGS. 8A and 9A directly, or by applying coarse coding. If, for example, a second- or third-order neural network were trained on the 8×8 input fields illustrated in FIGS. 8A and 9A directly, HONNs having an architecture similar to that shown in FIGS. 2 and 5 could be used. However, these HONNs would require 64 (8^2) input nodes and the appropriate number of interconnections to represent all possible pairs or triplets of pixel combinations.
With coarse coding implemented using wrap-around, as explained above, there are two possible combinations which will provide an effective input field of 8×8 pixels: two coarse-code fields 67a, 67b of 4×4 coarse pixels, illustrated in FIG. 8B, or four coarse-code fields 69a-69d of 2×2 coarse pixels, illustrated in FIG. 9B.
In the present example, the coarse-code fields 67a and 67b illustrated in FIG. 8B are used. Applying coarse coding by using two coarse-code fields of 4×4 coarse pixels, as illustrated in FIG. 8B, the two images shown in FIGS. 8A and 9A are transformed into the four images T_1, T_2 and C_1, C_2 shown in FIG. 10.
Note that the subpatterns when combined do not form the actual original image. The subpatterns are coarse-coded representations of an image. This is because an entire pixel in a coarse-code field is turned ON even if only a portion of that pixel overlies an ON portion of the original image. However, the combination of subpatterns for each original image is distinct for its respective image, and therefore, can be used to distinguish between different images.
The subpattern defined by each coarse-code field can be represented as a set of subpattern data, such as the following vectors:
T_{1} : (0000000001100010)
T_{2} : (0000011001000000)
C_{1} : (0000000000100010)
C_{2} : (0000011001100000)
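The coarse-coding step itself can be sketched as below: a coarse pixel is set ON whenever any fine pixel beneath it is ON, as described above. This hypothetical helper uses the wrap-around variant with field i offset by i fine pixels; depending on the offset convention chosen, the resulting vectors may be ordered differently from the T_1/T_2/C_1/C_2 vectors listed above.

```python
import numpy as np

def coarse_code(image, n_fields=2):
    """Coarse-code a binary ifs x ifs image into n_fields flattened
    subpattern vectors (wrap-around variant).  A coarse pixel is ON if
    it overlies any ON fine pixel."""
    ifs = image.shape[0]
    cfs = ifs // n_fields                 # coarse-field side, in coarse pixels
    fields = []
    for i in range(n_fields):
        sub = np.zeros((cfs, cfs), dtype=int)
        for y in range(ifs):
            for x in range(ifs):
                if image[y, x]:
                    cx = ((x - i) % ifs) // n_fields
                    cy = ((y - i) % ifs) // n_fields
                    sub[cy, cx] = 1       # any covered ON pixel turns it ON
        fields.append(sub.flatten())
    return fields
```

For an 8×8 "T" or "C" image and `n_fields=2`, this produces two 16-element vectors of the kind listed above, which together form the network's input.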
Training of the network then proceeds in the usual way (described in more detail below), with one modification: the transfer function thresholds the value obtained from summing the weighted products (triangles in the illustrative thirdorder neural network) over all coarse images associated with each training object. That is,
y = 1, if Σ_n (Σ_j Σ_k Σ_l w_jkl x_j x_k x_l) > 0; y = 0, otherwise,  (12)
where j, k and l range from 1 to the coarse-code field size squared (which in the above example would be 16), n ranges from 1 to the number of coarse-code fields, the x's represent coarse pixel values, and w_jkl represents the weight associated with the triplet of inputs (j, k, l).
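Equation (12) can be sketched as a single shared weight tensor applied to every coarse-code field, with the threshold applied once to the grand total rather than per field. This is a hypothetical NumPy sketch; the function name is an assumption.

```python
import numpy as np

def coarse_coded_output(subpatterns, w):
    """Network output per equation (12): sum the third-order weighted
    products over all coarse-code fields, using one shared weight tensor
    w, then apply the hard-limiting threshold once to the total."""
    f = sum(np.einsum('jkl,j,k,l->', w, x, x, x) for x in subpatterns)
    return 1 if f > 0 else 0
```

Because the same tensor `w` is reused for every field, the storage cost is that of one small field, regardless of how many coarse-code fields make up the input pattern.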
The architecture of the network is illustrated in FIG. 11. The neural network 70 of FIG. 11 is a third-order neural network, somewhat similar to the neural network of FIG. 5. The neural network of FIG. 11 differs from that of FIG. 5 in that the FIG. 11 neural network includes a plurality of fields (60a, 60b), the summation points (64a, 64b) of which are attached to output node y_i. This is in accordance with the relationship defined by equation (12). Each coarse-code field (containing the subpattern data) is associated with a corresponding one of the fields 60a, 60b, etc. Thus, for the two coarse-code fields 67a and 67b shown in FIG. 8B, the neural network would be provided with two fields, each field having 16 input nodes (x_1a through x_16a for one field and x_1b through x_16b for the other field). The neural network of FIG. 11 illustrates the first four input nodes for both fields 60a and 60b needed to receive inputs from coarse-code fields 67a and 67b.
In order to train the network, the values (vectors containing 1s and 0s) of patterns T_{1} and T_{2} are supplied to the input nodes of fields 60a and 60b, respectively, and are associated with an output node signal of, for example, 1. The same is done with the values of patterns C_{1} and C_{2}, except that this combination of values is associated with an output node signal of, for example, 0. Initially, the weights w are all set to 0 or some other starting number. The network then trains itself (i.e., assigns values to the respective weights using, for example, equation (9) with the constraints of equation (7) until equation (12) is satisfied).
Within each field of the neural network, the excitation values received by each triplet (or pair, in a second-order neural network) are multiplied together to form an intermediate value at product points 62a, 62b. The intermediate values obtained at the product points of each respective field are then weighted and summed to produce a sub-pattern value at the summation point 64a, 64b of that field. These sub-pattern values are summed at output node y_{1} to produce a pattern value, and the transfer function is then applied to the pattern value by output node y_{1} to produce an output signal (1 or 0 in the present example).
During testing, an input image is again transformed into a set of coarse sub-patterns. Each of these coarse sub-patterns, represented, for example, as a vector, is then presented to the network, and the output value is determined using, for example, equation (12). The input pattern is recognized as the training pattern to which its output signal corresponds.
When each coarse-code field has the same size, as illustrated in FIGS. 7B, 8B and 9B, the architecture of each field (60a, 60b . . . ) in the neural network is the same. Moreover, the weights assigned to corresponding weighted interconnections are the same across all fields. For example, in FIG. 11, the value (w_{ijkl}(a)) of each weighted interconnection in field 60a is equal to the value (w_{ijkl}(b)) of the corresponding weighted interconnection in field 60b, for all similar values of i, j, k and l. This further reduces the number of interconnections which must be stored in memory. Thus, optimally, only a single field architecture needs to be stored, and it is provided with the sub-pattern data from each coarse-code field in turn.
If coarse-code fields having different sizes are used, the neural network fields would not all have the same architecture and would have to be stored separately. This choice, however, depends on the particular problem being addressed and on the network designer.
Additionally, as is known, when more than two distinct patterns are to be recognized, more than one output node y_{i} is usually required. For example, a neural network having two output nodes (and using a threshold function that outputs either a 1 or a 0) can distinguish between four patterns by combining the binary outputs of the output nodes to represent four different values, such as (00, 01, 10, 11). In this case, referring to the example where similarly sized coarse-code fields are used, each neural network field would have a similar architecture including a plurality of summation points 64, each corresponding to a respective output node y_{i}. The plurality of fields would be combined (this can be visualized by stacking the fields on top of each other, as is done in FIG. 11) so that each summation point in each field corresponds to a summation point in each of the other fields, defining a set of common summation points. (For example, summation points 64a and 64b define a set of common summation points.) Each set of common summation points is associated with an output node, which applies an appropriate threshold function to the pattern value received by that node (determined by summing the sub-pattern values of the summation points in the set).
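The combination of binary output-node signals into one of four values, as described above, amounts to reading the outputs as the bits of an index. A minimal sketch (function name assumed for illustration):

```python
def decode_class(outputs):
    """Combine the binary signals of the output nodes into a class
    index, e.g. for two nodes: (0,0)->0, (0,1)->1, (1,0)->2, (1,1)->3."""
    idx = 0
    for y in outputs:
        idx = (idx << 1) | y  # each node contributes one bit
    return idx
```

With n output nodes this scheme distinguishes up to 2^n patterns.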
One illustrative embodiment of the present invention will now be described. In this embodiment, the present invention is applied to an automated tool selection system. FIG. 12 is a block diagram of the automated tool selection system. FIG. 13 is a flowchart of the training procedure performed by the automated tool selection system. FIG. 14 is a flowchart of the testing procedure performed by the automated tool selection system of FIG. 12.
The illustrative embodiment, shown in FIG. 12, illustrates how the present invention can be applied to the common robotics manufacturing task of "bin picking". This system includes a robot 80 having a camera 82 mounted on an arm thereof so as to observe the work space 84 below it. Work space 84 comprises a bin containing tools to be identified, with each tool individually located in a bin space within the work area. Robot 80 is directed to look at each bin space in the work area and to identify the tool located there. The tool could be located at any location within the bin and could be rotated in-plane. Additionally, the camera height is not held constant, so the tools could vary in apparent size. Accordingly, a third-order neural network is appropriate. When the desired tool is found, the user is notified and a grappling operation is initiated.
The robot is controlled via a communications link 86 by a computer 90. Computer 90 includes a mouse 92 which, for example, functions as one means for inputting data to computer 90. A conventional frame grabber 94 is also coupled to computer 90, and will be discussed in more detail below.
Prior to directing the robot to begin identifying tools found in work space 84, computer 90 runs the training procedure. Then, as each object in the work space is observed, its image is transmitted (via communications link 86) to computer 90, which runs the testing procedure described below.
FIG. 13 shows the training procedure 100. The training procedure begins with an assumed (programmer-set) input field size N×N, number of coarse-code fields n, and coarse-code field size M×M (step 102). The following step (104) determines the included angles α, β, and γ (to some granularity) for all triangles formed by connecting all possible combinations of three pixels in a given coarse-code input field (i.e., one having the size M×M).
Since this computation is expensive, and the combination of triplets for a given field size does not depend on the objects to be distinguished, these angles can be predetermined and stored in a file. Step 104 would then be modified to read the included angles corresponding to each combination of three pixels from a file, rather than determining them in real time.
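A sketch of the angle table that step 104 would precompute (and that could be stored in a file, per the preceding paragraph): for every combination of three pixels in an M×M field, the three included angles of the triangle they form, rounded here to whole degrees. The whole-degree granularity, the handling of collinear triplets, and the dictionary layout are illustrative assumptions:

```python
import itertools
import math

def included_angles(size):
    """Map each combination of three pixel indices in a size x size
    field to the included angles (rounded to whole degrees) of the
    triangle those pixels form.  Collinear triplets yield degenerate
    0/180-degree entries and would be skipped or handled separately
    in practice."""
    coords = [(r, c) for r in range(size) for c in range(size)]
    table = {}
    for a, b, c in itertools.combinations(range(len(coords)), 3):
        pts = [coords[a], coords[b], coords[c]]
        angles = []
        for i in range(3):
            # angle at vertex i between the two sides meeting there
            p, q, r = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
            v1 = (q[0] - p[0], q[1] - p[1])
            v2 = (r[0] - p[0], r[1] - p[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            norm = math.hypot(*v1) * math.hypot(*v2)
            cos = max(-1.0, min(1.0, dot / norm))
            angles.append(round(math.degrees(math.acos(cos))))
        table[(a, b, c)] = tuple(angles)
    return table
```

Since the table depends only on the field size, it can be computed once offline and read back in step 104, as the text suggests.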
Step 106 then sets up the correspondence between the angles α, β, and γ (using the same granularity as in step 104), for example using equation (7), such that all triplets of angles which are members of the alternating group (i.e., the cyclic order of the angles matters, but not which angle comes first) point to a single memory location. This assures that all similar triangles will manipulate the same weight value, as described above.
One possible implementation of step 106 is to use three matrices (w, w_angle and w_invar) linked with pointers. Each location in w (indexed by the triplet i, j, k representing the input pixels) points to a location in w_angle (indexed by the triplet α, β, γ representing the angles formed by the triplet i, j, k). Similarly, each location in w_angle points to a location in w_invar, also indexed by a triplet of angles α, β, γ, such that the smallest angle is assigned to α. That is, w_angle[80][60][40] points to w_invar[40][80][60], as do the elements w_angle[60][40][80] and w_angle[40][80][60].
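The final level of this pointer scheme, which maps all cyclic rotations of an angle triplet to a single w_invar location, can be sketched as a canonicalizing function (function name assumed). It reproduces the example above, sending (80, 60, 40), (60, 40, 80) and (40, 80, 60) all to the same key:

```python
def invariant_key(alpha, beta, gamma):
    """Rotate the angle triplet cyclically so the smallest angle comes
    first, preserving the cyclic order.  All cyclic rotations of a
    triplet -- i.e. all similar triangles -- then index one weight."""
    t = (alpha, beta, gamma)
    i = t.index(min(t))
    return t[i:] + t[:i]
```

Using the canonical tuple as a dictionary key gives the effect of the w_angle-to-w_invar pointers without explicit pointer tables.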
In step 108, the number of training samples can be either user-input or programmer-set. If two different types of tools are contained in bin 84, NUM_SAMPLES=2, and only a single output node is required for the neural network. Steps 110 to 116 read in the training data (breaking the input image into the sub-pattern data for each coarse-code field) and assign the expected output value t[I] to each training pattern. The expected output value t[I] is user-determined and is based upon the number of objects being distinguished. For example, if the network is distinguishing between two different objects in a manner which is invariant to translation, scaling, and in-plane rotation, a single-layer, third-order neural network having a single output node, as illustrated in FIG. 11, can be used. If the hard-limiting transfer function illustrated by equation (12) is used, one of the training patterns would be assigned the value t=0, while the other would be assigned the value t=1.
Training, as described above, begins in step 118. Steps 118 to 128 determine the output y (by summing the weights for all triangles which are ON in the current training object, in accordance with equation (12)) and compare the output y to the expected output value t for each training object. The weights w for each link are initially set to 0 or some other starting number. The network is fully trained when it correctly recognizes all of the training images (step 130 = yes), at which point the testing procedure can be initiated. Otherwise, the weights are adjusted in step 132 by adding in the difference between the expected and generated outputs in accordance with, for example, equation (9), and the procedure returns to step 118.
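Steps 118 to 132 can be sketched as follows, assuming a simple perceptron-style update in place of equation (9) (which is not reproduced in this section) and omitting the angle-based weight sharing for brevity; the function names are illustrative:

```python
import itertools

def net_output(fields, w):
    # hard-limit the summed weighted triple products (cf. equation (12))
    total = sum(w.get((j, k, l), 0) * x[j] * x[k] * x[l]
                for x in fields
                for j, k, l in itertools.combinations(range(len(x)), 3))
    return 1 if total > 0 else 0

def train(samples, targets, w, max_passes=100):
    """Steps 118-132 sketched: compare each training object's output y
    with its expected value t and, on error, add (t - y) to the weight
    of every triplet that is ON in that object's coarse fields."""
    for _ in range(max_passes):
        all_correct = True
        for fields, t in zip(samples, targets):
            y = net_output(fields, w)
            if y != t:
                all_correct = False
                for x in fields:
                    for j, k, l in itertools.combinations(range(len(x)), 3):
                        if x[j] and x[k] and x[l]:
                            w[(j, k, l)] = w.get((j, k, l), 0) + (t - y)
        if all_correct:
            break  # step 130: all training images recognized
    return w
```

Each sample here is a list of coarse-code field vectors (one object's sub-patterns), mirroring how the vectors T and C above are presented together during training.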
The last procedure, the testing procedure illustrated in FIG. 14, is called each time the robot observes a new object. An image is grabbed in step 202 via frame grabber 94, connected to the camera 82 mounted on the arm of robot 80. The image is then binarized via thresholding in step 204, and its edges are extracted in step 206. The thresholding and edge extraction can be performed by conventional means. Steps 204 and 206 are usually necessary in a real-time vision system, since the pattern is supplied directly from a camera. In other applications, however, steps 204 and 206 may not be required if the pattern is supplied as a binary, edge-only pattern.
It is preferable to input patterns composed only of pixels located along the edge of the object, so as to improve the network's invariance to scaling. This applies both to the training patterns utilized in step 112 and to the test patterns utilized in the testing procedure. The use of outlines of the pattern (instead of the entire pattern) reduces the number of new pixel triplets introduced when the pattern is enlarged. However, if invariance to scale is not important, edge extraction is not necessary.
Steps 208 and 210 determine the coarse images (sub-patterns) for the object to be tested, which are then supplied to the input nodes of the corresponding fields of the neural network in accordance with equation (12) to determine the output y. Step 210, in essence, produces the network's hypothesis about what the object in the camera's field of view is. This hypothesis is, for example, displayed (step 212) (or the robot is directed to grapple the object), and the testing procedure is repeated for the next image (step 214).
The present invention is applicable to many other applications in addition to the tool selection application described above. It can be used to recognize patterns of, for example, characters or aircraft. Moreover, coarse coding can be used whenever coded data is input to a neural network for recognition or other analysis, in order to reduce the number of interconnections required in the neural network.
The coarse-coding technique described above was evaluated using the expanded version of the T/C problem. (See the above-incorporated references by Reid et al. and Spirkovska et al. for a more detailed description of the T/C problem.) Implementing coarse coding using the intersection of fields described above, the input image resolution for the T/C problem was increased to 127×127 pixels using 9 coarse-code fields of 15×15 coarse pixels. The network was trained on just two images: the largest T and the largest C possible within the 127×127 input field. Training took just 5 passes.
A complete test set of translated, scaled and 1° rotated views of the two objects in a 127×127 pixel input field consists of about 135 million images. Assuming a test rate of 200 images per hour, it would take about 940 computer months to test all possible views. Accordingly, testing was limited to a representative subset consisting of four sets:
(1) All translated views, but with the same orientation and scale as the training images.
(2) All views rotated inplane at 1° intervals, centered at the same position as the training images but only 60% of the size of the training images.
(3) All scaled views of the objects, in the same orientation and centered at the same position as the training images.
(4) A representative subset of approximately 100 simultaneously translated, rotated, and scaled views of the two objects.
The network achieved 100% accuracy on test images in sets (1) and (2). Furthermore, the network recognized, with 100% accuracy, all scaled views, from test set (3), down to 38% of the original size. Objects smaller than 38% were classified as Cs. Finally, for test set (4), the network correctly recognized all images larger than 38% of the original size, regardless of the orientation or position of the test image.
A third-order network also learned to distinguish between practical images, such as a space shuttle orbiter 20 versus an F-15 aircraft 25 (see FIGS. 15A and 15B), in up to a 127×127 pixel input field. In this case, training took just six passes through the training set, which consisted of just one (binary, edge-only) view of each aircraft. As with the T/C problem, the network achieved 100% recognition accuracy on translated and in-plane rotated views of the two images. Additionally, the network recognized images scaled down to almost half the size of the training images, regardless of their position or orientation.
The minimum possible coarse-code field size depends on the training images. The network is unable to distinguish between the training images when the size of each coarse pixel is increased to the point where the training images no longer produce unique coarse-coded representations. For example, with the T/C problem, the minimum coarse-code field size which still produces unique representations is 3×3 pixels.
In contrast, the maximum limit is determined by the HONN architecture and the memory available for its implementation, not by the coarse-coding technique itself. The number of possible triplet combinations in a third-order network is N^{2} choose 3 for an N×N pixel input field. Thus, given the memory constraints of the Sun 3/60 discussed above, the maximum possible coarse-code field size was 18×18 pixels.
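The N^{2}-choose-3 growth can be checked directly (function name assumed); for the 18×18 maximum field size cited above it gives roughly 5.6 million triplet combinations:

```python
from math import comb

def triplet_count(n):
    """Number of distinct pixel triplets in an n x n field: C(n^2, 3)."""
    return comb(n * n, 3)
```

For example, triplet_count(18) is C(324, 3) = 5,616,324, which illustrates why available memory, rather than the coarse-coding technique, bounds the field size.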
Regarding the number of coarse-code fields which can be used while still achieving object recognition invariant to translation, scaling, and in-plane rotation, the minimum is one field, whereas the maximum has not been reached. A minimum of one coarse-code field represents the non-coarse-coded HONN case discussed with respect to FIGS. 2 and 5. In order to determine the limit for the maximum number of coarse-code fields possible, simulations were run on the T/C problem coded with a variable number of 3×3 coarse-code fields. A third-order network was able to distinguish between the two characters in fewer than 10 passes for input field sizes up to 4095×4095 pixels using 2,047 fields. An input field resolution of 4096×4096 was also achieved using 273 fields of 16×16 coarse pixels. Increasing the number of fields beyond this was not attempted, because 4096×4096 is the maximum resolution available on most image processing hardware that would be used in a complete HONN-based vision system.
The weighting techniques and threshold functions usable in a HONN constructed according to the present invention are not limited to the two examples provided above in equations (9) and (12). For example, see the above-incorporated references to Reid et al., Spirkovska et al., and Giles et al., which disclose different weight determination procedures (with or without invariance constraints) and different threshold functions (which, for example, produce output signals from the sets (-1, 1) or (-1, 0, 1) instead of (0, 1) as described above).
While this invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the invention as set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention as defined in the following claims.
Claims (30)
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

US07908141 US5333210A (en)  19920702  19920702  Method and system for pattern analysis using a coarsecoded neural network 
Publications (1)
Publication Number  Publication Date 

US5333210A true US5333210A (en)  19940726 
Family
ID=25425268
Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

US4802103A (en) *  19860603  19890131  Synaptics, Inc.  Brain learning and recognition emulation circuitry and method of recognizing events 
US4803736A (en) *  19851127  19890207  The Trustees Of Boston University  Neural networks for machine vision 
US5151951A (en) *  19900315  19920929  Sharp Kabushiki Kaisha  Character recognition device which divides a single character region into subregions to obtain a character code 
Non-Patent Citations (15)
Title

Fukaya et al., "Two-Level Neural Networks: Learning by Interaction with Environment", IEEE First Int. Conf. on Neural Networks, Jun. 21, 1987.
Giles et al., "Encoding Geometric Invariances in Higher-Order Neural Networks", Neural Information Processing Systems, American Institute of Physics Conference Proceedings, 1988, pp. 301-309.
Giles et al., "Learning, Invariance, and Generalization in High-Order Neural Networks", Applied Optics, 1987, vol. 26, pp. 4972-4978.
Lapedes et al., "Programming a Massively Parallel, Computation Universal System: Static Behavior", American Institute of Physics, pp. 283-298, Mar. 1986.
Li et al., "Invariant Object Recognition Based on a Neural Network of Cascaded RCE Nets", 1990, vol. 2, pp. 845-854.
Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, pp. 4-22, Apr. 1987.
Lippmann, "Pattern Classification Using Neural Networks", IEEE Communications Magazine, pp. 47-56, Nov. 1989.
Nielson, "Neurocomputing Applications: Sensor Processing, Control, and Data Analysis", Neurocomputing, Addison-Wesley, 1990.
Reid et al., "Rapid Training of Higher-Order Neural Networks for Invariant Pattern Recognition", Proceedings of Joint Int. Conf. on Neural Networks, Washington, D.C., Jun. 18-22, 1989, vol. 1, pp. 689-692.
Reid et al., "Simultaneous Position, Scale, and Rotation Invariant Pattern Classification Using Third-Order Neural Networks", Int. J. of Neural Networks, 1, 1989, pp. 154-159.
Rosen et al., "Adaptive Coarse-Coding for Neural Net Controllers", 1991, vol. 1, pp. 493-499.
Rosenfeld et al., "A Survey of Coarse-Coded Symbol Memories", Proc. of the 1988 Connectionist Models Summer School, Carnegie-Mellon Univ., Jun. 17-26, 1988, pp. 256-264.
Specht, "Probabilistic Neural Networks and the Polynomial Adaline as Complementary Techniques for Classification", IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 111-121, Mar. 1990.
Spirkovska et al., "Connectivity Strategies for Higher-Order Neural Networks Applied to Pattern Recognition", Int. Joint Conf. on Neural Networks, San Diego, Calif., Jun. 17-21, 1990, vol. I, pp. 21-26.
Yager, "On the Aggregation of Processing Units in Neural Networks", Machine Intelligence Institute, Iona College, pp. II-327 to II-333.
Cited By (8)
Publication number  Priority date  Publication date  Assignee  Title 

US5995953A (en) *  19930219  19991130  International Business Machines Corporation  Method for verification of signatures and handwriting based on comparison of extracted features 
US5459636A (en) *  19940114  19951017  Hughes Aircraft Company  Position and orientation estimation neural network system and method 
US5903884A (en) *  19950808  19990511  Apple Computer, Inc.  Method for training a statistical classifier with reduced tendency for overfitting 
US20040150538A1 (en) *  20030121  20040805  Samsung Electronics Co., Ltd.  Apparatus and method for selecting length of variable length coding bit stream using neural network 
US6885320B2 (en) *  20030121  20050426  Samsung Elecetronics Co., Ltd.  Apparatus and method for selecting length of variable length coding bit stream using neural network 
US20060261168A1 (en) *  20050520  20061123  Polaroid Corporation  Print medium feature encoding and decoding 
WO2006127253A3 (en) *  20050520  20070607  Polaroid Corp  Print medium feature encoding and decoding 
US7905409B2 (en)  20050520  20110315  Senshin Capital, Llc  Print medium feature encoding and decoding 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: NATIONAL AERONAUTICS AND SPACE ADMINISTRATION, THE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:SPIRKOVSKA, LILJANA;REID, MAX B.;REEL/FRAME:006211/0466 Effective date: 19920701 

LAPS  Lapse for failure to pay maintenance fees  
FP  Expired due to failure to pay maintenance fee 
Effective date: 19980729 