US7274819B2 - Pattern recognition apparatus using parallel operation - Google Patents
Pattern recognition apparatus using parallel operation
Info
- Publication number
- US7274819B2 (application US10/156,942; US15694202A)
- Authority
- US
- United States
- Prior art keywords
- pattern
- detected
- feature
- category
- consolidation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime, expires
Links
- 238000003909 pattern recognition Methods 0.000 title claims abstract description 56
- 238000007596 consolidation process Methods 0.000 claims abstract description 132
- 238000001514 detection method Methods 0.000 claims abstract description 132
- 238000000034 method Methods 0.000 claims abstract description 81
- 230000008569 process Effects 0.000 claims abstract description 53
- 238000012545 processing Methods 0.000 claims description 29
- 239000000284 extract Substances 0.000 claims description 4
- 238000012567 pattern recognition method Methods 0.000 claims 14
- 210000002569 neuron Anatomy 0.000 description 46
- 230000015654 memory Effects 0.000 description 27
- 238000010586 diagram Methods 0.000 description 14
- 238000005070 sampling Methods 0.000 description 13
- 210000000225 synapse Anatomy 0.000 description 13
- 238000003384 imaging method Methods 0.000 description 12
- 210000004027 cell Anatomy 0.000 description 11
- 239000013598 vector Substances 0.000 description 10
- 238000013528 artificial neural network Methods 0.000 description 7
- 230000000946 synaptic effect Effects 0.000 description 7
- 230000009466 transformation Effects 0.000 description 7
- 238000010276 construction Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 230000009467 reduction Effects 0.000 description 4
- 230000003321 amplification Effects 0.000 description 3
- 238000012937 correction Methods 0.000 description 3
- 230000002964 excitative effect Effects 0.000 description 3
- 238000007667 floating Methods 0.000 description 3
- 230000002401 inhibitory effect Effects 0.000 description 3
- 238000003199 nucleic acid amplification method Methods 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 239000003990 capacitor Substances 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000008878 coupling Effects 0.000 description 2
- 238000010168 coupling process Methods 0.000 description 2
- 238000005859 coupling reaction Methods 0.000 description 2
- 210000001787 dendrite Anatomy 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 230000008054 signal transmission Effects 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- XUIMIQQOPSSXEZ-UHFFFAOYSA-N Silicon Chemical compound [Si] XUIMIQQOPSSXEZ-UHFFFAOYSA-N 0.000 description 1
- 230000036982 action potential Effects 0.000 description 1
- 239000000654 additive Substances 0.000 description 1
- 230000000996 additive effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 210000003050 axon Anatomy 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000001054 cortical effect Effects 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000010304 firing Methods 0.000 description 1
- 239000002784 hot electron Substances 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000000513 principal component analysis Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 229910052710 silicon Inorganic materials 0.000 description 1
- 239000010703 silicon Substances 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000005309 stochastic process Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/955—Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
Definitions
- the present invention relates to a pattern recognition apparatus using a parallel operation.
- Image/speech recognition techniques can be generally classified into two types. In one type, a recognition algorithm specialized for recognition of a particular type of image/voice is described in the form of computer software and executed sequentially. In the other type, recognition is performed using a dedicated parallel image processor (such as a SIMD or MIMD machine).
- One widely used approach to image recognition is to calculate a feature value indicating the degree of similarity between an image of an object and an object model.
- model data of an object to be recognized is represented in the form of a template model, and recognition is performed by calculating the degree of similarity between an input image (or a feature vector thereof) and a template or by calculating a high-order correlation coefficient.
- the calculation may be performed by means of hierarchical parallel processing (Japanese Examined Patent Application Publication No. 2741793).
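For orientation, a minimal sketch of this template-matching style of similarity computation is given below; the use of normalized cross-correlation and the function names are illustrative assumptions, not the specific formulation used in the cited prior art.

```python
import numpy as np

def similarity(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between an image patch and a template.
    Values near 1 mean the patch closely resembles the template."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image and return the best score and position."""
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = similarity(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best, best_pos
```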
- Japanese Patent Laid-Open No. 6-176158 discloses a technique in which the degree of similarity of feature vectors of an input pattern with respect to a category is calculated individually for each feature vector, and the overall degree of similarity is determined using the degrees of similarity of respective feature vectors normalized with respect to a maximum degree of similarity. Finally, recognition is performed on the basis of the overall degree of similarity.
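A minimal sketch of that per-feature normalization scheme, assuming dot-product similarities and simple averaging for the overall degree of similarity (both are illustrative assumptions):

```python
import numpy as np

def overall_similarity(feature_vectors, prototypes):
    """Per-feature similarities (dot products here), normalized by the maximum
    similarity over all feature vectors, then averaged into one overall score."""
    sims = np.array([float(np.dot(f, p)) for f, p in zip(feature_vectors, prototypes)])
    max_sim = sims.max()
    if max_sim <= 0:
        return 0.0
    return float((sims / max_sim).mean())
```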
- Japanese Patent Laid-Open No. 9-153021 discloses a parallel processing apparatus in which an input digital signal is divided into a plurality of parts and the divided parts are processed in parallel by a plurality of processors, wherein division of the input digital signal into the plurality of parts is performed such that the calculation cost is minimized and the performance is optimized depending on the input digital signal.
- Another problem is that the technique encounters difficulty when the size of an object in an image to be recognized differs from that of the object model, or when an image includes a plurality of objects with different sizes. Recognition may still be possible if a plurality of object models corresponding to various sizes are prepared and the degree of similarity is calculated one by one for all object models of different sizes. However, this requires a large-scale circuit (large memory size), and the processing efficiency is low.
- the present invention which achieves these objectives relates to a pattern recognition apparatus comprising time-division data inputting means for inputting data by time-sequentially inputting pattern data, which is part of the input data and which has a predetermined size, a plurality of times; position information inputting means for inputting position information of the pattern data in the input data; feature detection means including an operation element having a predetermined operation characteristic, for detecting a feature of a predetermined middle-order or high-order category from the pattern data; time-sequential consolidation means for time-sequentially consolidating the outputs from the feature detection means on the basis of the position information and the category of the feature and producing feature detection map information; and judgment means for outputting position information and category information of a high-order feature present in the input data, on the basis of the output from the time-sequential consolidation means.
- the present invention which achieves these objectives relates to a pattern recognition apparatus comprising data inputting means for scanning pattern data with a predetermined size, which is part of input data, thereby inputting the pattern data; detection means for detecting a predetermined feature from the pattern data; scanning position changing means for changing, on the basis of the type of the feature, scanning position at which the pattern data is scanned by the data inputting means; consolidation means for consolidating a plurality of features detected at different scanning positions and determining, on the basis of consolidation result, the likelihood of presence of a specific pattern; and judgment means for outputting position information indicating the position of the specific pattern and information indicating the type of the specific pattern, on the basis of the output from the consolidation means.
- FIG. 1 is a diagram illustrating main parts of a first embodiment of the present invention.
- FIG. 2 is a diagram illustrating main parts of a local area recognition module.
- FIG. 3A is a diagram illustrating coupling between layers.
- FIG. 3B is a diagram illustrating a basic circuit configuration of a pulse generator serving as a neuron element.
- FIG. 3C is a diagram illustrating another example of coupling between a synapse circuit and a neuron element.
- FIG. 4 is a diagram illustrating a time-sequential consolidation module used in the first embodiment.
- FIG. 5 is a flow chart illustrating a process performed by the time-sequential consolidation module.
- FIG. 6A is a table showing an example of data in the form of a list representing relative positions of middle-order features.
- FIG. 6B is a diagram illustrating a process of detecting middle-order features.
- FIG. 7 is a diagram illustrating a judgment unit.
- FIG. 8 is a flow chart illustrating a main process according to a second embodiment.
- FIG. 9 is a flow chart illustrating a main process according to a third embodiment.
- FIG. 10 is a diagram illustrating main parts of a fourth embodiment.
- FIG. 11 is a diagram illustrating main parts of a fifth embodiment.
- FIG. 12 is a flow chart illustrating a main process according to a fifth embodiment.
- FIG. 13 is a diagram illustrating main parts of an imaging apparatus that is an example of an apparatus using a pattern recognition apparatus.
- FIG. 14 is a flow chart illustrating a process of judging a high-order pattern according to the second embodiment.
- FIG. 1 generally illustrates a pattern recognition apparatus according to the first embodiment.
- the pattern recognition apparatus includes a local area scanning unit 1 , an image inputting unit 2 , a local area recognition module 3 , a time-sequential consolidation module 4 , a judgment unit 5 , and a control unit 6 for controlling the operations of the above units or modules. Functions of the respective units/modules are described below.
- the local area scanning unit 1 defines, in the data input via the image inputting unit 2 , a local area with a rectangular shape (block shape or another shape) having a size determined by the control unit 6 , at a sampling point position that is moved step by step.
- the current local area partially overlaps the previous local area so that no reduction in detection accuracy occurs when a feature is present near a boundary between the local areas.
- the local area scanning unit 1 outputs a read control signal to the image inputting unit 2 (a sensor such as a CMOS sensor).
- the image inputting unit 2 reads an image signal from the block-shaped local area and provides the resultant signal to the local area scanning unit 1 .
- the above reading process may be performed in accordance with a known technique (for example, a technique disclosed in Japanese Patent Laid-Open No. 11-196332, filed by the present applicant).
- a CCD is used as the sensor, an image is temporarily stored in a frame memory or the like, and then the image stored therein is scanned from a specified block-shaped local area to another.
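As a rough illustration of how overlapping block-shaped local areas can be scanned over the input, here is a short sketch; the block size and overlap values are arbitrary examples, not values specified by the patent.

```python
def block_scan_positions(height, width, block=32, overlap=8):
    """Yield top-left corners of block-shaped local areas covering the input,
    with neighbouring blocks overlapping by `overlap` pixels so that a feature
    lying near a block boundary is still fully contained in some block."""
    step = block - overlap
    for y in range(0, max(height - block, 0) + 1, step):
        for x in range(0, max(width - block, 0) + 1, step):
            yield y, x

# e.g. scanning positions for a 64x96 input with 32x32 blocks and 8-pixel overlap
positions = list(block_scan_positions(64, 96))
```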
- the local area recognition module 3 includes a hierarchical neural network circuit for detecting geometrical features of various orders from low to high.
- the local area recognition module 3 receives the data of the block-shaped local area defined above and informs the consolidation module 4 whether the local area includes a middle-order or high-order pattern of a predetermined category.
- the time-sequential consolidation module 4 receives position information from the local area scanning unit 1 and consolidates the data, associated with block-shaped local areas at different positions, output from the local area recognition module 3 on the basis of the position information. On the basis of the consolidation result, the time-sequential consolidation module 4 outputs information indicating whether a specific pattern has been detected. If the time-sequential consolidation module 4 obtains a detection signal (position information and category information) of a high-order pattern (of an object to be recognized) from the local area recognition module 3 , the time-sequential consolidation module 4 directly transfers the detection information to the judgment unit 5 .
- the judgment unit 5 checks the output of the time-sequential consolidation module 4 on the basis of a judgment parameter supplied from the control unit 6 and outputs information indicating the position of the detected pattern in the input data and information indicating the category of the detected pattern.
- the local area recognition module 3 is described in detail below with reference to FIG. 2 .
- This module 3 mainly deals with information associated with recognition (detection) of an object feature or a geometric feature in a local area of input data.
- the local area recognition module 3 has a structure similar to the convolutional network structure (LeCun, Y. and Bengio, Y., 1995, “Convolutional Networks for Images, Speech, and Time Series” in Handbook of Brain Theory and Neural Networks (M. Arbib, Ed.), MIT Press, pp. 255-258), except that reciprocal local connection between layers in the network is allowed (as will be described later).
- the final output indicates the recognition result, that is, the category of the recognized object and the position thereof in the input data.
- a data input layer 101 inputs local area data from a photoelectric conversion device such as a CMOS sensor or a CCD device (image inputting unit 2 ), under the control of the local area scanning unit 1 .
- the data input layer 101 may input high-order data obtained as a result of analysis (such as principal component analysis or vector quantization) performed by particular data analysis means.
- a first feature detection layer 102 (1, 0), performs, by means of Gabor wavelet transformation or another multiple resolution processing method, detection of a low-order local feature (that may include a color feature in addition to a geometric feature) of an image pattern received from the data input layer 101 , for a plurality of scale levels or a plurality of feature categories in the local area centered at each scanning point.
- the feature detection layer 102 (1, 0) has a receptive field 105 whose structure corresponds to the type of the feature (for example, in a case where a line segment in a particular direction is extracted as a geometric feature, the receptive field has a structure corresponding to the direction), and the feature detection layer 102 (1, 0) includes neuron elements that generate pulse trains in accordance with the likelihood of that feature's presence.
- feature detection layers 102 (1, k) form, as a whole, processing channels for various resolutions (scale levels).
- a set 104 of feature detection cells, having receptive field structures that include Gabor filter kernels with different orientation selectivity for the same scale level, forms a processing channel in the feature detection layer 102 (1, 0).
- feature detection cells in a following layer 102 (1, 1) which receive data output from these feature detection cells in the feature detection layer 102 (1, 0) (and which detect a higher-order feature), belong to the same processing channel as that described above.
- in the feature detection layers 102 (1, k) (k>1), feature detection cells that receive data output from a set 106 of feature consolidation cells forming a particular channel in a feature consolidation layer 103 (2, k−1), which will be described in more detail below, belong to the same channel as that particular channel.
- a Gabor wavelet has a shape obtained by modulating, using a Gaussian function, a sinusoidal wave in a particular direction with a particular spatial frequency.
- a set of filters is provided to achieve the wavelet transformation, wherein each filter has a similar function shape but is different in principal direction and size.
- a Gabor wavelet has a localized function shape both in the spatial frequency domain and in the real spatial domain, and it has minimum joint uncertainty in position and spatial frequency. That is, the wavelets are the functions that are most localized in both real space and frequency space (J. G. Daugman (1985), “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters”, Journal of Optical Society of America A, vol. 2, pp. 1160-1169).
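The Gabor filter bank described above can be sketched as follows; the kernel sizes, wavelengths, and orientation count are illustrative choices rather than values given in the patent.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """2-D Gabor kernel: a sinusoid along direction `theta` with the given
    wavelength, modulated by a Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the wave direction
    envelope = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    return envelope * np.cos(2.0 * np.pi * x_theta / wavelength + phase)

# A bank of kernels differing in principal direction and size (scale level),
# e.g. four orientations at three scales.
bank = [gabor_kernel(size=s, wavelength=s / 2.5, theta=k * np.pi / 4, sigma=s / 5)
        for s in (9, 15, 21) for k in range(4)]
```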
- processes for different scale levels (resolutions) assigned to the respective channels are performed to detect and recognize features of various orders from low to high by means of the hierarchical parallel processing.
- the feature consolidation layer 103 (2, 0) includes neuron elements which output pulse trains and which have predetermined receptive field structures (“receptive field” refers to a range of connection with output elements in the immediately preceding layer, and “receptive field structure” refers to a distribution of connection weights).
- the feature consolidation layer 103 (2, 0) consolidates the outputs from neuron elements of the same receptive field in the feature detection layer 102 (1, 0) (by means of sub-sampling or the like using local averaging).
- the respective neurons in a feature consolidation layer have common receptive field structures assigned to that feature consolidation layer.
- the following feature detection layers 102 ((1, 1), (1, 2), . . . , (1, M)) and the feature consolidation layer 103 ((2, 1), (2, 2), . . . , (2, M)) each have their own receptive field structures, wherein the feature detection layers 102 (1, 1), (1, 2), . . . , and (1, M) detect different features, and the feature consolidation layers 103 (2, 1), (2, 2), . . . , and (2, M) respectively consolidate the features supplied from the feature detection layer at the preceding stage.
- the feature detection layers 102 are connected (interconnected) so that the feature detection layers 102 can receive the outputs from the cells, belonging to the same channels, in the feature consolidation layer at the preceding stage.
- sub-sampling is performed, for example, to average the outputs from feature detection cells in local areas (local receptive fields of neurons in the feature consolidation layer) for each of the feature categories.
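A minimal sketch of the sub-sampling by local averaging described above; the use of non-overlapping pooling windows is an assumption, since the patent only specifies local averaging over each neuron's local receptive field.

```python
import numpy as np

def subsample_by_local_averaging(feature_map: np.ndarray, pool: int = 2) -> np.ndarray:
    """Consolidate a feature-detection map by averaging non-overlapping
    pool x pool neighbourhoods (applied separately to each feature category)."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool                    # crop to a multiple of the pool size
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
```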
- FIG. 3A is a diagram illustrating connection between layers. As shown in FIG. 3A , neuron elements 201 in different layers are connected with each other via a signal transmission element 203 (an interconnecting line or a delay line) corresponding to an axial filament or a dendrite of a neuron and via a synapse circuit 202 .
- FIG. 3A shows the connection structure associated with the outputs (inputs when viewed from the feature detection (consolidation) cells) from neurons (n i ) of feature consolidation (detection) cells forming the receptive field of a specific feature detection (consolidation) cell.
- a signal transmission element denoted by a bold line serves as a common bus line, through which pulse signals output from a plurality of neurons are time-sequentially transmitted. Signals may be fed back from subsequent cells in a similar manner. More specifically, input signals and output signals may be treated using the same configuration by means of a time-division technique, or using a construction including dual systems similar to that shown in FIG. 3A , for inputting signals (to dendrites) and outputting signals (from axons).
- excitatory connection results in amplification of a pulse signal.
- inhibitory connection results in attenuation of a pulse signal.
- amplification and attenuation can be achieved by one of amplitude modulation, pulse width modulation, pulse phase modulation, and pulse frequency modulation.
- the synapse circuit 202 is mainly used to perform pulse phase modulation, whereby amplification of a signal is converted to a substantial advance of pulse arrival time corresponding to a feature, and attenuation is converted to a substantial delay. That is, the synaptic connection gives each pulse an arrival position (phase) in time, corresponding to a feature, at the destination neuron.
- excitatory connection results in an advance of pulse arrival time with respect to a reference phase
- inhibitory connection results in a delay of pulse arrival time.
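The pulse phase modulation described above can be pictured with a toy mapping from synaptic weight to arrival time; the linear relation and the sign convention are assumptions for illustration only.

```python
def pulse_arrival_time(reference_time: float, weight: float, scale: float = 1.0) -> float:
    """Map a synaptic weight to a pulse arrival time relative to a reference phase:
    excitatory weights (w > 0) advance the pulse, inhibitory weights (w < 0) delay it."""
    return reference_time - scale * weight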
- each neuron element n j is of the integrate-and-fire type that will be described later and outputs a pulse signal (spike train).
- a synapse circuit and a neuron element may be combined into a circuit block as shown in FIG. 3C .
- Each neuron element is based on a model extended from a fundamental neuron model called an integrate-and-fire neuron model. These neurons are similar to the integrate-and-fire neurons in that when the linear sum, in time/space domain, of input signals (pulse train corresponding to action potential) exceeds a threshold value, the neuron fires and outputs a pulse signal.
- FIG. 3B shows an example of a basic circuit configuration of a pulse generator (CMOS circuit) constructed so as to serve as a neuron element, wherein this circuit configuration is based on a known circuit (IEEE Trans. on Neural Networks Vol. 10, pp. 540). This circuit is configured so that both excitatory and inhibitory inputs can be input.
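Below is a small software sketch of a leaky integrate-and-fire neuron of the kind referred to here; the leak time constant, threshold, and discrete-time simulation are illustrative choices, not details of the CMOS circuit in FIG. 3B.

```python
import numpy as np

def integrate_and_fire(spike_times, weights, threshold=1.0, tau=10.0, dt=0.1, t_max=50.0):
    """Leaky integrate-and-fire neuron driven by weighted input pulses.
    Each arriving pulse adds its synaptic weight to the membrane potential,
    which decays with time constant `tau`; crossing `threshold` emits a spike
    and resets the potential."""
    events = sorted(zip(spike_times, weights))
    potential, spikes, idx, t = 0.0, [], 0, 0.0
    while t < t_max:
        potential *= np.exp(-dt / tau)                      # leak
        while idx < len(events) and events[idx][0] <= t:
            potential += events[idx][1]                     # integrate weighted pulse
            idx += 1
        if potential >= threshold:
            spikes.append(round(t, 3))                      # fire
            potential = 0.0                                 # reset
        t += dt
    return spikes

# Three excitatory pulses arriving close together drive the neuron to fire once.
print(integrate_and_fire([5.0, 6.0, 7.0], [0.5, 0.4, 0.4]))
```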
- the degree of consistency between a middle-order pattern detected in a local area during the scanning process and a high-order pattern is evaluated in terms of the relative position and the type.
- the type and the position of a middle-order pattern that will be detected next are predicted, and the scanning position is jumped in accordance with the prediction. This makes it possible to detect a pattern more efficiently than is possible with uniform scanning such as raster scanning.
- the time-sequential consolidation module 4 includes a high-order pattern map generation unit 41 for generating a map of detection levels (and, if necessary, features and types) of high-order patterns and positions thereof, a middle-order pattern consolidation unit 42 for outputting a predicted position (that will be described later) of a middle-order pattern that will be detected and also outputting a category of a high-order pattern having the highest matching degree, a memory 43 for storing data (e.g., template pattern data) representing a category of a high-order pattern, and a primary storage 44 for storing a predicted position (that will be described later) of a middle-order pattern.
- the data output from the local area recognition module 3 to the time-sequential consolidation module 4 includes a high-order pattern (such as a face to be finally recognized), information indicating whether there is a middle-order pattern (such as an eye, nose, or a mouth on the face) that can be an element of the high-order pattern, and information indicating the position of the middle-order pattern.
- when a middle-order pattern is detected at a scanning position within a local area but no high-order pattern including the detected middle-order pattern is detected in that local area of the input data (this can occur when the size of the high-order pattern is greater than the size of the local area), the middle-order pattern may be consolidated via a time-sequential consolidation process performed by the middle-order pattern consolidation unit 42 so that a high-order pattern having a greater size can be detected using the consolidated result.
- a signal output from a neuron in the feature consolidation layer 103 (2, m) responsible for middle-order feature detection and signals output from neurons in the final feature consolidation layer 103 (2, M) for giving detection information of a high-order feature (object to be detected) are supplied to the time-sequential consolidation module 4 via a bus line.
- a signal output from a neuron in the feature consolidation layer 103 (2, m) is supplied to both the next feature detection layer 102 (1, m+1) and time-sequential consolidation module 4 via the bus line.
- Transmission among neurons using a pulse signal may be performed using a technique based on, for example, the AER (Address Event Representation) technique (Lazzaro, et al., 1993, Silicon Auditory Processors as Computer Peripherals, In Tourestzky, D. (ed), Advances in Neural Information Processing Systems 5 , San Mateo, Calif., Morgan Kaufmann Publishers).
- the prediction unit 46 of the time-sequential consolidation module 4 selects one candidate for a high-order pattern that can include the detected middle-order pattern and predicts, using a method that will be described later, a category and a position (arrangement) of other middle-order pattern that will be detected in the candidate for the high-order pattern.
- the middle-order pattern consolidation unit 42 then outputs, to the judgment unit 5 , a signal whose level depends on whether a pattern of the predicted category is detected at the predicted position (the output level becomes high if the predicted pattern is detected at the predicted position) and which thus indicates a detection probability (detection likelihood) that a pattern of the predicted category will be detected.
- the control unit 6 obtains information indicating the position of the predicted middle-order pattern from the time-sequential consolidation module 4 and outputs a sampling point control signal to the local area scanning unit 1 so that the local area scanning unit 1 can next scan a local area centered at the position of the predicted middle-order pattern. This process will be described in further detail later with reference to FIG. 5 .
- if the local area recognition module 3 detects, in a local area, a high-order pattern with an output level higher than a predetermined threshold value, the local area recognition module 3 outputs the category information (detection probability or detection likelihood) and position information of the object detected in that local area to the time-sequential consolidation module 4 .
- the control unit 6 obtains position information of the detected pattern from the local area scanning unit 1 and transfers the position information to the judgment unit 5 .
- if the maximum value of the outputs from neurons of a feature consolidation module belonging to a particular category f NM is greater than a predetermined threshold value, that maximum neuron output is supplied, as information indicating the category and position of the detected object, to the time-sequential consolidation module 4 .
- the maximum neuron output associated with the high-order pattern is supplied to the high-order pattern map generation unit 41 of the time-sequential consolidation module 4 , while, as for the middle-order pattern, the neuron output of the feature consolidation layer 103 (2, m) is supplied to the middle-order pattern consolidation unit 42 via the bus line. Furthermore, in the time-sequential consolidation module 4 , the above-described process is performed on both the high-order pattern and the middle-order pattern.
- the middle-order pattern consolidation unit 42 is a signal processing circuit (so-called middleware) for outputting a predicted category of an undetected middle-order pattern included in a high-order pattern that can include the detected middle-order pattern and also outputting a predicted position thereof near the detected middle-order pattern.
- for example, given the class of a specific object (a high-order pattern such as a pattern of a face of a human being viewed from the front) and a detected middle-order pattern (a pattern of an element of the object, such as a pattern of an eye), the prediction concerns the class of another middle-order pattern (for example, the other eye, a nose, or a mouth) and its position.
- the circuit (prediction unit 46 ) that performs the prediction does not perform a complicated operation associated with a stochastic process or the like; rather, the circuit is constructed using a logic circuit that refers to combinatory list data represented in the form of a dictionary with associated data (indicating the relative position vectors of possible middle-order patterns) and outputs data.
- the list data is given in the form of a linked list of middle-order patterns included in a high-order pattern, and associated data represents the distance and direction of each middle-order pattern using relative position vectors.
- the predicted position varies depending on the class of the detected middle-order pattern and on the processing channel to which the neuron having the maximum output in the feature consolidation layer 103 (2, m) of the local area recognition module 3 belongs. In the present embodiment, differences in the size of the object to be detected and in the feature are reflected in differences among the processing channels; thus, the positions (predicted positions) of middle-order patterns that have not yet been detected vary depending on the size.
- step S 501 category information of high-order patterns that can include, as an element thereof, a middle-order pattern detected by the local area recognition module 3 is read from the memory 43 of the time-sequential consolidation module 4 .
- step S 502 the category and the position of a middle-order pattern having a high probability of being detected next near the already-detected middle-order pattern are determined for each of the high-order patterns and stored in the primary storage 44 .
- step S 503 it is determined whether there can be a plurality of undetected middle-order patterns near the predicted position. If it is determined that there can be a plurality of such patterns, a pattern that is closer to the predicted position in a principal scanning direction (for example, to the right, or from upper left to bottom right) is selected (S 504 ).
- step S 505 output data indicating the predicted position of the pattern selected by the middle-order pattern consolidation unit 42 is input to the control unit 6 and used by the control unit 6 to control the scanning position.
- the control unit 6 converts the predicted position information into position control data to be used by the local area scanning unit 1 to define the position of the local area.
- the resultant position control data is supplied to the local area scanning unit 1 .
- step S 506 the output from the middle-order feature consolidation layer 103 (2, m), which indicates the degree of consistency between a detected middle-order pattern and a candidate for a high-order pattern (the degree of consistency is determined one by one for all high-order pattern candidates), is supplied to the middle-order pattern consolidation unit 42 from the local area recognition module 3 .
- the middle-order pattern consolidation unit 42 of the time-sequential consolidation module 4 acquires, under the control of the scanning unit 1 , the recognition result of the local area data (the same area as that selected in step S 504 ) from the local area recognition module 3 and judges the matching with the category of the already-detected middle-order pattern (S 507 ) as described below. In the case where the judgment indicates that the matching is good, it is checked whether there is a middle-order pattern which has not been detected yet (S 508 ), and the flow returns to step S 502 if there is. In step S 509 , the flow returns to step S 501 if there is a high-order pattern which has not been tested yet.
- the above-described prediction and judgment at the middle-order pattern level is performed repeatedly as long as there is a middle-order pattern that has not been detected yet.
- information indicating the category of the high-order pattern judged as having a high degree of matching and the detection level (indicating the detection probability or the detection likelihood) thereof are output to the judgment unit 5 (S 510 ).
- step S 507 : the judgment regarding the degree of matching of the middle-order pattern on the basis of the category of the high-order pattern is described below. If the category-to-configuration correspondence of the remaining middle-order patterns that match the category of the high-order pattern and the category of the already-detected middle-order pattern is stored in advance in the form of a table in a memory, it is possible to make the judgment by means of a simple logic decision process using a simple logic circuit.
- FIG. 6A An example of data indicating the correspondence is shown in FIG. 6A .
- the data indicating the correspondence is given in the form of a table.
- “face” is given as the category of a high-order pattern
- “eye” is given as the category of a first-detected middle-order pattern.
- a middle-order pattern size is given by a channel number k (scale level k) of a middle-order pattern feature consolidation layer 103 (2, m)
- the categories and positions of the remaining middle-order patterns that match “face” and “eye” are given as “nose” with r_e-n,k , “mouth” with r_e-m,k , and “eye” with r_e-e1,k and r_e-e2,k .
- r denotes a relative position vector with respect to the already-detected middle-order pattern.
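To make the use of such a table concrete, here is a hypothetical sketch; the category names follow FIG. 6A, but the numeric relative-position vectors, the channel indices, and the `detect_at` callback are invented for illustration and are not part of the patent.

```python
# Hypothetical correspondence table in the spirit of FIG. 6A: for a detected
# middle-order pattern ("eye") inside a high-order category ("face"), list the
# remaining middle-order patterns and their relative position vectors (dx, dy)
# per scale channel k.  The numeric offsets are invented for illustration.
CORRESPONDENCE = {
    ("face", "eye"): {
        1: [("eye",   (40, 0)),    # r_e-e1,k : the other eye
            ("nose",  (20, 25)),   # r_e-n,k
            ("mouth", (20, 50))],  # r_e-m,k
        2: [("eye",   (80, 0)),
            ("nose",  (40, 50)),
            ("mouth", (40, 100))],
    },
}

def consolidate(detected, position, channel, detect_at):
    """Predict where the remaining middle-order patterns should lie, jump the
    scanning position there, and count how many predictions are confirmed.
    `detect_at(category, pos)` stands in for the local area recognition module."""
    confirmed, expected = 0, 0
    for (high_order, middle), per_channel in CORRESPONDENCE.items():
        if middle != detected:
            continue
        for category, (dx, dy) in per_channel.get(channel, []):
            expected += 1
            predicted = (position[0] + dx, position[1] + dy)
            if detect_at(category, predicted):       # scan the predicted local area
                confirmed += 1
    return confirmed / expected if expected else 0.0  # detection likelihood of "face"
```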
- FIGS. 6B-1 to 6B-4 illustrate the process of detecting middle-order patterns for a case where a certain middle-order pattern (for example, an eye) included in a high-order pattern (face) is first detected and then other middle-order patterns (eye, nose, and mouth) represented in the form of a tree in FIGS. 6B-1 to 6B-4 are detected.
- nodes represented by open circles denote those which have not been detected yet, and nodes represented by solid circles denote those which have already been detected.
- Eye- 1 and eye- 2 denote left and right eyes, respectively.
- the detection state changes from ( 1 ) through ( 4 ) in FIG. 6B .
- it is assumed that one eye, that is, eye- 2 is detected at a predicted position.
- the judgment unit 5 includes a thresholding unit 51 and a detection pattern map information generation unit 52 .
- the thresholding unit 51 performs a thresholding process on the detection level signal of a high-order pattern supplied from the time-sequential consolidation module 4 .
- threshold information is supplied from the control unit 6 . If the detection level signal supplied from the time-sequential consolidation unit 4 is higher than the threshold value, the detection pattern map information generation unit 52 stores information indicating the category and position of the high-order pattern into the memory 7 in which detected pattern map information associated with the entire input data is stored. Alternatively, the information may be supplied to a predetermined display.
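A minimal sketch of the thresholding and map-generation step performed by the judgment unit; the data layout (a list of (category, position, level) tuples) is an assumption made for illustration.

```python
def judge(detections, threshold):
    """Keep only high-order pattern detections whose level exceeds the threshold
    and record their category and position in a detected-pattern map."""
    detection_map = {}
    for category, position, level in detections:
        if level > threshold:
            detection_map.setdefault(category, []).append(position)
    return detection_map

# e.g. judge([("face", (12, 40), 0.87), ("face", (70, 10), 0.32)], threshold=0.5)
```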
- the above-described construction makes it possible to detect the position of a pattern of a specific category from input data (image) using a simple circuit configuration. Furthermore, because the recognition circuit deals with only part of the input data and is capable of detecting both middle-order and high-order patterns, a great reduction in circuit complexity and a greater improvement in efficiency are achieved, compared with the construction in which a plurality of features at a plurality of positions in the input data are detected simultaneously and in parallel.
- the pattern recognition apparatus described above may be disposed on an image inputting device such as a camera or on image outputting device such as a printer or a display.
- if the pattern recognition apparatus is disposed on an image inputting device, it becomes possible to recognize or detect a specific object and perform focusing, exposure adjustment, zooming, color correction, and/or other processing with respect to an area centered at the detected object, using a small-scale circuit having low power consumption. If the pattern recognition apparatus is disposed on an image outputting device, it becomes possible to automatically perform optimum color correction for a specific subject.
- the pattern detection (recognition) apparatus may be disposed on an imaging apparatus to perform focusing of a specific subject, color correction of a specific subject, and exposure adjustment for a specific subject, as described below with reference to FIG. 13 , which illustrates main parts of the imaging apparatus including the pattern recognition apparatus according to the present embodiment.
- the imaging apparatus 1101 includes an imaging optical system 1102 including an imaging lens and a zooming mechanism, a CCD or CMOS image sensor 1103 , an imaging parameter measuring unit 1104 , an image signal processing circuit 1105 , a storage unit 1106 , a control signal generator 1107 for generating a control signal for controlling an operation of taking an image and controlling an imaging condition, a display 1108 also serving as a viewfinder such as an EVF, a flash lamp 1109 , and a storage medium 1110 . Furthermore, a pattern recognition apparatus capable of performing time division multiplexing processing is provided as an object detection (recognition) apparatus 1111 .
- in this imaging apparatus 1101 , a face image of a person, registered in advance, is detected (in terms of the position and the size) from an image being taken, using the object detection (recognition) apparatus 1111 .
- Information about the position and the size of the person image is supplied from the object detection (recognition) apparatus 1111 to the control signal generator 1107 .
- the control signal generator 1107 generates a control signal on the basis of the output from the imaging parameter measuring unit 1104 to properly control the focus, the exposure, and the white balance with respect to the image of that person.
- by using the pattern detection (recognition) apparatus in the imaging apparatus in the above-described manner, it becomes possible to detect an image of a person and properly control the imaging conditions for the detected image at a high speed (in real time) using a small-sized circuit having low power consumption.
- in the second embodiment, the sampling point position scanned by the local area scanning unit 1 is changed in accordance with a predetermined procedure (raster scanning procedure), and the block size is fixed (based on the predetermined maximum size of an object to be detected).
- the controlling of the sampling point position during the process does not depend on the output from the local area recognition module 3 .
- the local area recognition module 3 detects a middle-order or high-order pattern.
- the construction of the pattern recognition apparatus is similar to that according to the first embodiment.
- the local area recognition module 3 includes processing channels assigned to different object sizes to detect an object for various different sizes.
- FIG. 8 is a flow chart of a process according to the present embodiment.
- step S 801 the position of a sampling point on input data is set in accordance with a predetermined scanning procedure.
- step S 802 a middle-order pattern at the sampling point position is examined to determine whether it matches a high-order pattern. That is, a middle-order pattern and a corresponding high-order pattern that matches the middle-order pattern are detected.
- the local area recognition unit 3 outputs the detection level (maximum neuron output level of those in the feature consolidation layer) of the middle-order or high-order pattern detected in the scanning process.
- the time-sequential consolidation unit 4 stores, into the primary storage 44 , detection pattern distribution (map) information, the category, the detection level, and the position of the pattern each time such a pattern is detected.
- the stored middle-order pattern data is part of a high-order pattern having a size (greater than the block size) that cannot be detected in a local area with a given size.
- the judgment unit 5 After completion of changing the scanning position over the entire input data, the judgment unit 5 checks the data stored in the primary storage 44 of the time-sequential consolidation unit 4 to judge whether an object image (high-order pattern) is present in an area around the position where the middle-order pattern has been detected (the high-order pattern including that middle-order pattern cannot be detected at the position where the middle-order pattern is detected because of the limitation of the block size). If the high-order pattern (object to be detected) is determined to be present, the position and the category thereof are determined (step S 805 ).
- the process in step S 805 is not a simple thresholding process. As shown in FIG. 14 , the process performed in step S 805 is basically the same as the process performed by the time-sequential consolidation unit 4 in the first embodiment described above. That is, the process is performed as described below while scanning the detection map associated with a middle-order pattern stored in the primary storage 44 .
- step S 8101 high-order pattern categories are input and one of them is selected. Thereafter, in step S 8102 , a next predicted position to jump to in the scanning of the detection map is determined. A category of a feature predicted to be present at that position is also determined.
- the process is performed (steps S 8105 and S 8106 ) in a similar manner to the process performed in steps S 505 and S 506 by the time-sequential consolidation unit 4 according to the first embodiment described earlier with reference to FIG. 5 .
- in step S 8107 , matching between the middle-order pattern at the predicted position and the high-order pattern is evaluated by performing a simple logical decision. Thereafter, the process from step S 8101 to step S 8107 is performed repeatedly until it is determined in steps S 8108 and S 8109 that there are no more middle-order patterns that have not been detected yet and there are no more high-order patterns that have not been evaluated yet. After that, map information associated with a detected high-order pattern is output as a final result (S 8110 ).
- in step S 8110 , a combination of middle-order patterns which matches (in terms of the arrangement) one of the prepared high-order patterns is extracted, and information about the type of the high-order pattern and the position thereof is output.
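The overall flow of this embodiment, raster scanning with a fixed block size followed by post-hoc consolidation of the stored detection map, might look roughly like the following; `recognize_block` and `consolidate_map` are hypothetical stand-ins for the local area recognition module and the consolidation/judgment steps of FIG. 14.

```python
def scan_then_consolidate(image, positions, recognize_block, consolidate_map):
    """Raster-scan fixed-size blocks, store every middle/high-order detection
    together with its position, then consolidate the stored map afterwards."""
    detection_map = []
    for y, x in positions:                                  # predetermined raster order
        for category, level in recognize_block(image, y, x):
            detection_map.append({"pos": (y, x), "category": category, "level": level})
    return consolidate_map(detection_map)                   # judge high-order patterns afterwards
```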
- in the third embodiment, the size of the block-shaped local area defined by the scanning unit 1 is controlled by a block setting unit (not shown), and consolidation and recognition are performed by the local area recognition module 3 , the time-sequential consolidation module 4 , and the judgment unit 5 .
- the local area recognition module 3 includes a plurality of parallel processing channels corresponding to different scale levels.
- the block size may be updated according to one of two methods described below. In the first method, the control unit 6 determines the block size at each scanning position, and the local area recognition module 3 outputs data at each scanning position. In the second method, consolidation and recognition are performed by scanning the entire input data while fixing the block size. Thereafter, the block size is changed and consolidation and recognition are performed for the updated block size.
- the local area recognition module 3 detects only a high-order pattern, and thus the data supplied to the time-sequential consolidation unit 4 is output only from the highest-level feature consolidation layer. Except for the above, the process performed by the recognition module 3 is similar to that according to the previous embodiments.
- FIG. 9 is a flow chart of a main process according to the present embodiment.
- step S 901 the sampling point position in the input data is determined in accordance with a predetermined scanning procedure.
- step S 902 setting or changing of the block size is performed in accordance with a predetermined procedure (as described above).
- step S 903 the local area recognition module 3 detects a high-order pattern in a local area.
- step S 904 the detection level of a pattern that matches a prepared high-order pattern is output.
- step S 905 the detection level and the category of the high-order pattern are supplied from the local area recognition module 3 to the time-sequential consolidation module 4 .
- corresponding scanning position information is supplied from the control unit 6 to the time-sequential consolidation module 4 .
- the time-sequential consolidation module 4 generates a high-order pattern detection map and outputs it (to store it into the storage).
- the judgment unit 5 performs a thresholding process (S 906 ) and outputs data indicating the position of the high-order pattern (to be detected) in the input data.
- the difference in the size of the block-shaped local area during the scanning process corresponds to the difference in the processing channel of the local area recognition module 3 described above with reference to the first embodiment. That is, a high-order pattern is detected at each scanning position for various sizes.
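A rough sketch of the second block-size update method described above (scan the whole input once per block size); `positions_for` and `recognize_block` are hypothetical helpers introduced only for illustration.

```python
def scan_multiple_block_sizes(image, block_sizes, positions_for, recognize_block):
    """Scan the entire input once per block size; each block size plays the role
    of one processing channel, so high-order patterns of different sizes are
    detected directly at each scanning position."""
    detections = []
    for block in block_sizes:                               # e.g. (32, 48, 64)
        for y, x in positions_for(block):
            for category, level in recognize_block(image, y, x, block):
                detections.append((category, (y, x), block, level))
    return detections
```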
- FIG. 10 is a diagram illustrating main parts of a fourth embodiment.
- the local area recognition module 3 detects patterns of various different categories by time-sequentially changing the category during the detection process.
- in this pattern detection process, intermediate results obtained at respective sampling points of the input data are stored in memories 8 0 , 8 1 , . . . , 8 M , and then the intermediate detection results of the respective feature consolidation layers are read from the memories 8 0 , 8 1 , . . . , 8 M and consolidated by the time-sequential consolidation module 4 .
- the local area recognition module 3 hierarchically detects patterns of various orders from low to high using feature detection layers 102 and feature consolidation layers 103 alternately disposed in a cascade arrangement.
- Outputs from the respective feature detection layers 102 are sub-sampled by the feature consolidation layers 103 at respective stages as in the previous embodiments, and the results are temporarily stored in memories 8 0 , 8 1 , . . . , 8 M associated with the respective feature consolidation layers ( 103 ) such that different types are stored at different memory addresses. Furthermore, in the feature detection layers 102 , as described below, the synapse weight distribution (local receptive field structure) is changed, and the detection results for the respective feature types are time-sequentially input from the memory 8 .
- the local receptive field structures of the feature detection layer 102 are retained in the form of digital data in a memory such as an SRAM 40 for each feature type, and the local receptive field structures are changed as required in accordance with the data stored in the memory 40 . More specifically, the local receptive field structures can be realized using a dynamically reconfigurable FPGA and using a receptive field control circuit 45 for controlling associated synapse circuit elements.
- the receptive field structure of neurons in a feature detection layer 102 that detects a pattern of an eye at a certain time is changed at another time in accordance with a signal from the control unit 6 and the receptive field control circuit 45 so as to detect another pattern such as a nose or mouth pattern.
- the receptive field structure is determined by data called configuration bits of the FPGA stored in an SRAM (not shown).
- the receptive field structure is time-sequentially changed by the receptive field control circuit 45 by dynamically changing the configuration of the FPGA, that is, by changing the configuration bits stored in the FPGA in accordance with data stored in the memory 40 . That is, the configuration bits serve as data that determines the receptive field structure.
- a memory and a control circuit are needed to change the configuration bits for respective neurons.
- the neural network for the local area recognition module 3 according to the first embodiment described earlier can be realized using one set of configuration bits for each feature detection layer, the memory 40 , and the receptive field control circuit 45 , as described below.
- in this case, the local receptive fields of the neurons in the feature detection layer become the same.
- the configuration bits determine only the structure of the logical connections (interconnections). That is, the presence/absence of connection between a neuron and another neuron in a layer at a preceding stage is specified by a configuration bit.
- the weight value associated with each connection is set and changed so as to achieve the receptive field structure by setting and changing the weight data of the synapse circuit in accordance with the weighting data supplied from the memory 40 .
- the synaptic weight for each synapse is set and changed by injecting the amount of charge specified by the weight data stored in the memory 40 . More specifically, the receptive field control circuit 45 reads the synaptic weight data (indicating the voltage to be applied to inject the required amount of charge) from the memory 40 at a specified address, and the receptive field control circuit 45 injects a current into a floating gate element until the specified amount of charge is stored (until the specified voltage is obtained).
- it is possible to use a memory device to store data corresponding to the weights, if the data can be rewritten quickly enough and if the data can be retained for as long a period of time as required in that device.
- the receptive field structures of respective neurons in the feature detection layer are changed depending on the feature type. However, if the scale level, which is one of the feature types, is not changed, the receptive field structures of neurons in the feature consolidation layer are not changed. Note that specific values of the configuration bits are different from one neuron to another to reflect the difference in the actual interconnection (address) depending on the locations of the neurons in the respective feature detection layers.
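In software terms, the time-sequential reconfiguration of receptive field structures can be pictured as reusing one detection operation with a different weight kernel per feature type; the kernels and the `convolve` callback below are placeholders introduced for illustration, not the FPGA/floating-gate mechanism itself.

```python
import numpy as np

def detect_all_feature_types(local_block, weight_memory, convolve):
    """Reuse one detection stage for several feature types by loading a
    different receptive-field kernel for each type in turn, as the memory 40
    and receptive field control circuit 45 do in hardware.  `weight_memory`
    maps feature type -> kernel; `convolve` stands in for the shared
    detection operation."""
    results = {}
    for feature_type, kernel in weight_memory.items():      # time-sequential reconfiguration
        results[feature_type] = convolve(local_block, kernel)
    return results

# Illustrative weight memory; the kernels are placeholders, not real receptive fields.
weight_memory = {"eye": np.ones((5, 5)) / 25.0, "nose": np.eye(5) / 5.0}
```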
- a synapse circuit with a receptive field structure is realized using a 2-dimensional systolic array processor, and the receptive field structure is changed by changing the time-sequential data supplied to the systolic array elements to control pipeline processing (description of the systolic array can be found, for example, in “Parallel Computer Architecture” by Tomita (Shokodo, pp. 190-192, 1986), “Digital Neural Networks” by S. Y. Kung (PTR Prentice Hall, Englewood Clifs, pp. 340-361, 1993), and Japanese Examined Patent Application Publication No. 2741793).
- FIG. 11 is a diagram illustrating main parts of the fifth embodiment.
- the synaptic weight data stored in a memory 40 is time-sequentially supplied to respective synapse circuit elements arranged in a systolic array structure in the feature detection layer 102 and the feature consolidation layer 103 thereby controlling the local receptive field structure dynamically and time-sequentially.
- the synaptic weight may be given, for example, by the amount of charge injected in a floating gate element or stored in a capacitor.
- the respective synapse circuit elements Sk are sequentially accessed, and voltage signals corresponding to the weight data read from the memory 40 are applied to the synapse circuit elements Sk, as in the fourth embodiment described above.
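A highly simplified stand-in for this sequential loading of weight data into the synapse array; it only illustrates the idea of streaming weights from the memory 40 to successive synapse elements and does not model the pipelined systolic data flow itself.

```python
def stream_weights_to_synapses(synapse_array, weight_stream):
    """Sequentially load weight data into synapse circuit elements; each element
    here is just a dict slot, and assigning a weight corresponds to applying the
    voltage read from the memory 40 to that element."""
    for synapse, weight in zip(synapse_array, weight_stream):
        synapse["weight"] = weight
    return synapse_array

synapses = [{"weight": 0.0} for _ in range(8)]
stream_weights_to_synapses(synapses, [0.2, -0.1, 0.5, 0.0, 0.3, -0.2, 0.1, 0.4])
```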
- as a result, a drastic simplification in the circuit configuration is achieved.
- the outputs from the local area recognition module 3 are consolidated by the consolidation module 4 in synchronization with the timing control signal of the systolic array processor supplied from the control unit 6 , and the judgment unit 5 judges whether there is an object of the specified category.
- the processes performed by the time-sequential consolidation module 4 and the judgment unit 5 are substantially the same as those described earlier in the first embodiment, and thus they are not described herein.
- FIG. 12 is a flow chart illustrating a main process according to the present embodiment.
- the control unit 6 sets feature detection layer numbers (of various orders from low to high) and feature types (categories and sizes) in the respective layers. This setting process is performed in accordance with a predetermined procedure.
- step S 1202 and S 1203 feature data or image data of a specific category with weights depending on the receptive field structure is input to detection modules in the feature detection layer from the memory 8 or the data inputting layer 101 .
- step S 1203 the receptive field control circuit 45 time-sequentially sets the receptive field structure using pipeline data.
- the receptive field structures of respective neurons in the feature detection layer are changed depending on the feature type. However, if the scale level, which is one of the feature types, is not changed, the receptive field structures of neurons in the feature consolidation layer are not changed.
- step S 1204 the outputs from the feature detection layers are sub-sampled (in the feature consolidation layer) for respective feature types, and the results are stored in the memory 8 at different addresses depending on the feature type.
- the process from step S 1201 to step S 1204 is performed repeatedly for respective feature categories and layer numbers. If it is determined in step S 1205 that the process is completed for all feature categories and layer numbers, the process proceeds to step S 1206 .
- step S 1206 the time-sequential consolidation module 4 reads the detection results associated with the respective feature types from the memory 8 and produces a detection map of middle-order or high-order features.
- step S 1207 the judgment unit 5 performs a thresholding process to finally determine whether an object of the specified category is present. If such an object is present, the judgment unit 5 outputs information indicating the position thereof.
- a plurality of features are detected in local areas while scanning input data, and the plurality of features detected in the local areas are integrated to finally detect (recognize) a pattern of a specific category with a specific size. This makes it possible to detect (recognize) a pattern in a highly efficient manner using a very simple circuit.
- the present invention makes it possible to efficiently extract local features (patterns) of specific categories for various different sizes, using a small-scale circuit.
- consolidation of local patterns extracted (detected) at different positions can be easily performed using a simple logic circuit by referring to data representing, in the form of a list with associated data, the configurations of middle-order patterns. This makes it possible to quickly detect a high-order pattern.
- the object can be detected in a highly reliable fashion by detecting low-order patterns or middle-order patterns on the basis of the output from the sensor and integrating them.
- circuit complexity can be greatly reduced by changing the receptive field structure depending on the type of feature to be detected.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Biodiversity & Conservation Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
In a pattern recognition apparatus, a local area recognition module is constructed with operation elements having predetermined operation characteristics. Pattern data of a predetermined size in the input data is acquired by time-sequentially performing an inputting process a plurality of times via a local area scanning unit, and information indicating the position of the pattern data in the input data is output. The local area recognition module detects a feature of a predetermined middle-order or high-order category from the pattern data. A consolidation module time-sequentially consolidates outputs from the local area recognition module on the basis of the position information and the category of the feature, thereby producing feature detection map information. A judgment unit outputs position information and category information of a high-order feature present in the input data, on the basis of the output from the time-sequential consolidation module.
Description
1. Field of the Invention
The present invention relates to a pattern recognition apparatus using a parallel operation.
2. Description of the Related Art
Image/speech recognition techniques can be generally classified into two types. In one type, a recognition algorithm specialized for recognition of a particular type of image/voice is described in the form of computer software and executed sequentially. In the other type, recognition is performed using a dedicated parallel image processor (such as a SIMD or MIMD machine).
One widely-used image recognition algorithm is to calculate a feature value indicating the degree of similarity between an image of an object and an object model. In this technique, model data of an object to be recognized is represented in the form of a template model, and recognition is performed by calculating the degree of similarity between an input image (or a feature vector thereof) and a template or by calculating a high-order correlation coefficient. The calculation may be performed by means of hierarchical parallel processing (Japanese Examined Patent Application Publication No. 2741793).
When the degree of similarity in terms of a local part of an object model is evaluated, if a part of an object is hidden, there is a possibility that difficulty occurs in the evaluation of the degree of similarity. A technique for avoiding such difficulty is disclosed in Japanese Patent Laid-Open No. 11-15495. In this technique, matching between a local part of an object and a local model is evaluated, and the likelihood of presence of the object is calculated for various local parts of the object. In accordance with the Dempster-Shafer technique or the fuzzy technique, the overall likelihood of presence of the image is then determined from the likelihood of presence calculated on the basis of individual local parts, thereby enhancing the reliability of recognition.
Japanese Patent Laid-Open No. 6-176158 discloses a technique in which the degree of similarity of feature vectors of an input pattern with respect to a category is calculated individually for each feature vector, and the overall degree of similarity is determined using the degrees of similarity of respective feature vectors normalized with respect to a maximum degree of similarity. Finally, recognition is performed on the basis of the overall degree of similarity.
Japanese Patent Laid-Open No. 9-153021 discloses a parallel processing apparatus in which an input digital signal is divided into a plurality of parts and the divided parts are processed in parallel by a plurality of processors, wherein division of the input digital signal into the plurality of parts is performed such that the calculation cost is minimized and the performance is optimized depending on the input digital signal.
However, in the technique disclosed in Japanese Patent Laid-Open No. 11-15945, when there are a plurality of categories in object models, it is not disclosed which local model should be employed and how matching results are consolidated. Furthermore, when the overall likelihood of presence of a feature is determined using non-additive measures on the basis of the Dempster-Shafer technique, it is not necessarily ensured that the resultant overall likelihood indicates optimum estimation.
Another problem is that the technique encounters difficulty when the size of an object in an image to be recognized differs from that of the object model, or when an image includes a plurality of objects with different sizes. Recognition may be possible if a plurality of object models corresponding to various sizes are prepared and the degree of similarity is calculated one by one for all object models corresponding to the different sizes. However, this requires a large-scale circuit (a large memory), and the processing efficiency is low.
In the parallel processing apparatus disclosed in Japanese Patent Laid-Open No. 9-153021, if input data includes a plurality of objects with different sizes, it is difficult to properly divide the input data. That is, when the type or size of an object is unknown, if an input signal is simply divided in a fixed manner, parallel processing for pattern recognition cannot be properly performed.
In the pattern recognition apparatus disclosed in Japanese Patent Laid-Open No. 6-176158, the improvement in the memory efficiency and the reduction in the circuit size cannot be achieved. In general, when pattern recognition is performed using a hierarchical parallel processing circuit (using a technique disclosed, for example, in Japanese Examined Patent Application Publication No. 2741793), detection of a plurality of features at sampling point positions on the input data is performed simultaneously and in parallel. Therefore, depending on the size of an input image, a large number of elements are required in a low-level layer, and thus a large-scale circuit is needed.
It is an object of the present invention to provide pattern recognition processing capable of efficiently performing recognition using a small-scale circuit for detecting (recognizing) a pattern of a predetermined category and size.
It is another object of the present invention to provide pattern recognition processing capable of efficiently extracting a local feature (pattern) of a specific category using a small-scale circuit, for various sizes of the local feature (pattern).
It is still another object of the present invention to provide pattern recognition processing capable of detecting an object in a highly reliable fashion even when the object to be detected is partially occluded by another object.
According to one aspect, the present invention which achieves these objectives relates to a pattern recognition apparatus comprising time-division data inputting means for inputting data by time-sequentially inputting pattern data, which is part of the input data and which has a predetermined size, a plurality of times; position information inputting means for inputting position information of the pattern data in the input data; feature detection means including an operation element having a predetermined operation characteristic, for detecting a feature of a predetermined middle-order or high-order category from the pattern data; time-sequential consolidation means for time-sequentially consolidating the outputs from the feature detection means on the basis of the position information and the category of the feature and producing feature detection map information; and judgment means for outputting position information and category information of a high-order feature present in the input data, on the basis of the output from the time-sequential consolidation means.
According to another aspect, the present invention which achieves these objectives relates to a pattern recognition apparatus comprising data inputting means for scanning pattern data with a predetermined size, which is part of input data, thereby inputting the pattern data; detection means for detecting a predetermined feature from the pattern data; scanning position changing means for changing, on the basis of the type of the feature, scanning position at which the pattern data is scanned by the data inputting means; consolidation means for consolidating a plurality of features detected at different scanning positions and determining, on the basis of consolidation result, the likelihood of presence of a specific pattern; and judgment means for outputting position information indicating the position of the specific pattern and information indicating the type of the specific pattern, on the basis of the output from the consolidation means.
Other objectives and advantages besides those discussed above shall be apparent to those skilled in the art from the description of a preferred embodiment of the invention which follows. In the description, reference is made to accompanying drawings, which form a part thereof, and which illustrate an example of the invention. Such example, however, is not exhaustive of the various embodiments of the invention, and therefore reference is made to the claims which follow the description for determining the scope of the invention.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Brief Description of the General Construction and Respective Elements
A first embodiment is described in detail below with reference to the accompanying drawings. FIG. 1 generally illustrates a pattern recognition apparatus according to the first embodiment. The pattern recognition apparatus includes a local area scanning unit 1, an image inputting unit 2, a local area recognition module 3, a time-sequential consolidation module 4, a judgment unit 5, and a control unit 6 for controlling the operations of the above units or modules. Functions of the respective units/modules are described below.
In accordance with a control signal supplied from the control unit 6, the local area scanning unit 1 defines, in the data input via the image inputting unit 2, a local area with a rectangular shape (block shape or another shape) having a size determined by the control unit 6, at sampling point positions that are changed one by one. In the block scanning process, it is desirable that the current local area partially overlap the previous local area so that no reduction in detection accuracy occurs when a feature is present near a boundary between these local areas.
The local area scanning unit 1 outputs a read control signal to the image inputting unit 2 (a sensor such as a CMOS sensor). In response, the image inputting unit 2 reads an image signal from the block-shaped local area and provides the resultant signal to the local area scanning unit 1. The reading process may be performed in accordance with a known technique (for example, a technique disclosed in Japanese Patent Laid-Open No. 11-196332, filed by the present applicant). In a case where a CCD is used as the sensor, an image is temporarily stored in a frame memory or the like, and the stored image is then scanned from one specified block-shaped local area to another.
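As a rough illustration of this block scanning, the sketch below enumerates overlapping block-shaped local areas over an input image; the block size, overlap ratio, and function names are illustrative assumptions rather than values taken from the embodiment.

```python
# Minimal sketch of block-shaped local area scanning with partial overlap,
# assuming a NumPy image; block_size and overlap are illustrative values.
import numpy as np

def scan_local_areas(image, block_size=32, overlap=0.5):
    """Yield (y, x, block) for overlapping block-shaped local areas."""
    step = max(1, int(block_size * (1.0 - overlap)))   # sampling-point stride
    h, w = image.shape[:2]
    for y in range(0, max(1, h - block_size + 1), step):
        for x in range(0, max(1, w - block_size + 1), step):
            yield y, x, image[y:y + block_size, x:x + block_size]

if __name__ == "__main__":
    img = np.zeros((128, 128), dtype=np.float32)
    print(sum(1 for _ in scan_local_areas(img)), "local areas scanned")
```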
The local area recognition module 3 includes a hierarchical neural network circuit for detecting geometrical features of various orders from low to high. The local area recognition module 3 receives the data of the block-shaped local area defined above and informs the consolidation module 4 whether the local area includes a middle-order or high-order pattern of a predetermined category.
The time-sequential consolidation module 4 receives position information from the local area scanning unit 1 and consolidates the data, associated with block-shaped local areas at different positions, output from the local area recognition module 3 on the basis of the position information. On the basis of the consolidation result, the time-sequential consolidation module 4 outputs information indicating whether a specific pattern has been detected. If the time-sequential consolidation module 4 obtains a detection signal (position information and category information) of a high-order pattern (of an object to be recognized) from the local area recognition module 3, the time-sequential consolidation module 4 directly transfers the detection information to the judgment unit 5.
In the case where the specific pattern has been detected, the judgment unit 5 checks the output of the time-sequential consolidation module 4 on the basis of a judgment parameter supplied from the control unit 6 and outputs information indicating the position of the detected pattern in the input data and information indicating the category of the detected pattern.
The local area recognition module 3 is described in detail below with reference to FIG. 2. This module mainly deals with information associated with recognition (detection) of an object feature or a geometric feature in a local area of input data. Basically, the local area recognition module 3 has a structure similar to the convolutional network structure (LeCun, Y. and Bengio, Y., 1995, “Convolutional Networks for Images, Speech, and Time Series” in Handbook of Brain Theory and Neural Networks (M. Arbib, Ed.), MIT Press, pp. 255-258), except that reciprocal local connection between layers in the network is allowed (as will be described later). The final output indicates the recognition result, that is, the category of the recognized object and the position thereof in the input data.
A data input layer 101 inputs local area data from a photoelectric conversion device such as a CMOS sensor or a CCD device (image inputting unit 2), under the control of the local area scanning unit 1. Alternatively, the data input layer 101 may input high-order data obtained as a result of analysis (such as principal component analysis or vector quantization) performed by particular data analysis means.
The operation of inputting an image is described below. A first feature detection layer 102 (1, 0) performs, by means of Gabor wavelet transformation or another multiple-resolution processing method, detection of a low-order local feature (which may include a color feature in addition to a geometric feature) of an image pattern received from the data input layer 101, for a plurality of scale levels or a plurality of feature categories in the local area centered at each scanning point. To this end, the feature detection layer 102 (1, 0) has a receptive field 105 whose structure corresponds to the type of the feature (for example, in a case where a line segment in a particular direction is extracted as a geometric feature, the receptive field has a structure corresponding to the direction), and the feature detection layer 102 (1, 0) includes neuron elements that generate pulse trains in accordance with the likelihood of that feature's presence.
As is described in detail in U.S. patent application Ser. No. 09/878,269 filed by the present applicant, the feature detection layers 102 (1, k) (k≧0) form, as a whole, processing channels for various resolutions (scale levels). For example, when the Gabor wavelet transformation is performed by the feature detection layer 102 (1, 0), a set 104 of feature detection cells, having receptive field structures including Gabor filter kernels with different orientation selectivity for the same scale level, forms a processing channel in the feature detection layer 102 (1, 0). Furthermore, feature detection cells in a following layer 102 (1, 1), which receive data output from these feature detection cells in the feature detection layer 102 (1, 0) (and which detect a higher-order feature), belong to the same processing channel as that described above. In following feature detection layers 102 (1, k) (k>1), feature detection cells that receive data output from a set 106 of feature consolidation cells forming a particular channel in a feature consolidation layer 103 (2, k−1), which will be described in more detail below, belong to the same channel as that particular channel.
Herein, a Gabor wavelet has a shape obtained by modulating, using a Gaussian function, a sinusoidal wave in a particular direction with a particular spatial frequency. A set of filters is provided to achieve the wavelet transformation, wherein each filter has a similar function shape but differs in principal direction and size. It is known that a wavelet has a localized function shape in the spatial frequency domain and also in the real spatial domain, and that it has minimum joint uncertainty in position and spatial frequency. That is, the wavelets are the functions that are most localized in both real space and frequency space (J. G. Daugman (1985), “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters”, Journal of Optical Society of America A, vol. 2, pp. 1160-1169).
More detailed description of the manner of performing Gabor wavelet transformation using a neural network can be found in a paper by Daugman (IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 36, pp. 1169-1179, 1988). Although the above paper does not disclose the manner of dealing with a part near a boundary in a local area (manner of retaining coefficients of Gabor wavelet transformation), it is desirable that Gabor wavelet transformation coefficients be multiplied by weighting factors depending on the distance of the local area from the center so as to minimize the influence of deviation of values from the ideal Gabor wavelet transformation coefficients near the boundary. Furthermore, as described below, it is assumed that intermediate results obtained in the scanning process are stored in a predetermined storage, for use in the consolidation process.
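To make the Gabor wavelet stage more concrete, the following sketch builds a small bank of Gabor kernels (Gaussian-modulated cosines) at several orientations and scale levels and applies a Gaussian weighting toward the block boundary, as suggested above. All parameter values (block size, wavelengths, sigma, number of orientations) are assumptions chosen only for illustration.

```python
# Sketch of a Gabor filter bank applied to one block-shaped local area,
# with Gaussian down-weighting toward the block boundary.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]  # size x size grid
    xr = x * np.cos(theta) + y * np.sin(theta)              # rotate into the carrier direction
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_responses(block, orientations=4, wavelengths=(4.0, 8.0)):
    """Filter responses of one local area for each (scale, orientation) pair."""
    size = block.shape[0]
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    # Down-weight pixels near the block boundary, as suggested in the text above.
    boundary_weight = np.exp(-(x**2 + y**2) / (2.0 * (size / 3.0)**2))
    weighted = block * boundary_weight
    responses = {}
    for wavelength in wavelengths:                 # scale levels (processing channels)
        for k in range(orientations):              # orientation selectivity
            theta = np.pi * k / orientations
            kern = gabor_kernel(size, wavelength, theta, sigma=wavelength / 2.0)
            responses[(wavelength, theta)] = float(np.sum(weighted * kern))
    return responses

if __name__ == "__main__":
    block = np.random.rand(32, 32).astype(np.float32)
    print(len(gabor_responses(block)), "scale/orientation responses")
```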
In processing channels, processes for different scale levels (resolutions) assigned to the respective channels are performed to detect and recognize features of various orders from low to high by means of the hierarchical parallel processing.
The feature consolidation layer 103 (2, 0) includes neuron elements which output pulse trains and which have predetermined receptive field structures (“receptive field” refers to a range of connection with output elements in the immediately preceding layer, and “receptive field structure” refers to a distribution of connection weights). The feature consolidation layer 103 (2, 0) consolidates the outputs from neuron elements of the same receptive field in the feature detection layer 102 (1, 0) (by means of sub-sampling or the like using local averaging). The respective neurons in a feature consolidation layer have common receptive field structures assigned to that feature consolidation layer.
The following feature detection layers 102 ((1, 1), (1, 2), . . . , (1, M)) and the feature consolidation layer 103 ((2, 1), (2, 2), . . . , (2, M)) each have their own receptive field structures, wherein the feature detection layers 102 (1, 1), (1, 2), . . . , and (1, M) detect different features, and the feature consolidation layers 103 (2, 1), (2, 2), . . . , and (2, M) respectively consolidate the features supplied from the feature detection layer at the preceding stage. The feature detection layers 102 are connected (interconnected) so that the feature detection layers 102 can receive the outputs from the cells, belonging to the same channels, in the feature consolidation layer at the preceding stage. In the feature consolidation layer, sub-sampling is performed, for example, to average the outputs from feature detection cells in local areas (local receptive fields of neurons in the feature consolidation layer) for each of the feature categories.
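A minimal sketch of the sub-sampling performed by a feature consolidation layer is shown below, assuming local averaging over a small pooling window; the 2x2 window size is an illustrative assumption.

```python
# Sketch of feature consolidation by sub-sampling: each consolidation neuron
# averages the detection-layer outputs inside its local receptive field.
import numpy as np

def consolidate(feature_map, pool=2):
    """Average detection-layer outputs over non-overlapping pool x pool windows."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool                 # trim to a multiple of the window
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

if __name__ == "__main__":
    detection_output = np.random.rand(33, 33)
    print(consolidate(detection_output).shape)        # sub-sampled map, here (16, 16)
```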
In the synapse circuit 202, an excitatory connection results in amplification of a pulse signal, whereas an inhibitory connection results in attenuation of a pulse signal. When information is transmitted using a pulse signal, amplification and attenuation can be achieved by one of amplitude modulation, pulse width modulation, pulse phase modulation, and pulse frequency modulation. In the present embodiment, the synapse circuit 202 is mainly used to perform pulse phase modulation, whereby amplification of a signal is converted into a substantial advance of the pulse arrival time corresponding to a feature, and attenuation is converted into a substantial delay. That is, the synaptic connection gives each pulse an arrival position (phase) in time, corresponding to a feature, at a destination neuron. Qualitatively, an excitatory connection results in an advance of the pulse arrival time with respect to a reference phase, and an inhibitory connection results in a delay.
In FIG. 3A , each neuron element nj is of the integrate-and-fire type that will be described later and outputs a pulse signal (spike train). A synapse circuit and a neuron element may be combined into a circuit block as shown in FIG. 3C .
Neurons included in respective layers are described below. Each neuron element is based on a model extended from a fundamental neuron model called an integrate-and-fire neuron model. These neurons are similar to the integrate-and-fire neurons in that when the linear sum, in time/space domain, of input signals (pulse train corresponding to action potential) exceeds a threshold value, the neuron fires and outputs a pulse signal.
The operation and the mechanism of the firing of the neurons are not described in further detail herein, because they are not essential to the present invention.
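Although the firing mechanism is not detailed here, the following behavioral sketch may help picture the pulse-phase signaling described above: a synaptic weight advances or delays a pulse's arrival phase, and an integrate-and-fire element fires when the contributions arriving within a time window exceed a threshold. The kernel and all numerical constants are assumptions, not the disclosed circuit.

```python
# Behavioral sketch only: phase-modulating synapses feeding an
# integrate-and-fire element. Constants are illustrative assumptions.

def modulate_phase(reference_phase, weight, gain=1.0):
    """Excitatory weight (> 0) advances the pulse arrival time; inhibitory (< 0) delays it."""
    return reference_phase - gain * weight

def integrate_and_fire(arrival_phases, window=10.0, threshold=2.0):
    """Sum a simple decaying kernel over arrivals inside the window; fire when it exceeds the threshold."""
    activity = sum(max(0.0, 1.0 - phase / window)
                   for phase in arrival_phases if 0.0 <= phase <= window)
    return activity >= threshold

if __name__ == "__main__":
    reference = 5.0                                   # reference phase (assumed units)
    weights = [1.5, 0.8, -0.6, 2.0]                   # per-synapse weights (assumed)
    phases = [modulate_phase(reference, w) for w in weights]
    print("fires:", integrate_and_fire(phases))       # -> fires: True
```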
Time-Sequential Consolidation
The operation of the part from the local area recognition module 3 and the time-sequential consolidation module 4 is described in detail below. In the present embodiment, the degree of consistency between a middle-order pattern detected in a local area during the scanning process and a high-order pattern is evaluated in terms of the relative position and the type. In this process, on the basis of the type of a middle-order pattern that is detected first, the type and the position of a middle-order pattern that will be detected next is predicted, and the scanning position is jumped in accordance with the prediction. This makes it possible to detect a pattern more efficiently than can be detected by means of uniform scanning such as raster scanning.
As shown in FIG. 4, the time-sequential consolidation module 4 includes a high-order pattern map generation unit 41 for generating a map of detection levels (and, if necessary, features and types) of high-order patterns and their positions, a middle-order pattern consolidation unit 42 for outputting a predicted position (described later) of a middle-order pattern that will be detected and also outputting the category of the high-order pattern having the highest matching degree, a memory 43 for storing data (e.g., template pattern data) representing a category of a high-order pattern, and a primary storage 44 for storing a predicted position (described later) of a middle-order pattern.
The data output from the local area recognition module 3 to the time-sequential consolidation module 4 includes a high-order pattern (such as a face to be finally recognized), information indicating whether there is a middle-order pattern (such as an eye, nose, or a mouth on the face) that can be an element of the high-order pattern, and information indicating the position of the middle-order pattern.
In a case where a middle-order pattern is detected at a scanning position within a local area and no high-order pattern including the detected middle-order pattern is detected in that local area of the input data (this can occur when the size of the high-order pattern is greater than the size of the local area), there is a possibility that the middle-order pattern will be consolidated via a time-sequential consolidation process performed by the middle-order pattern consolidation unit 42 in order to detect a high-order pattern of greater size using the consolidated result.
In order to make it possible to detect both a middle-order pattern and a high-order pattern, a signal output from a neuron in the feature consolidation layer 103 (2, m) responsible for middle-order feature detection and signals output from neurons in the final feature consolidation layer 103 (2, M) giving detection information of a high-order feature (object to be detected) are supplied to the time-sequential consolidation module 4 via a bus line. In particular, a signal output from a neuron in the feature consolidation layer 103 (2, m) is supplied to both the next feature detection layer 102 (1, m+1) and the time-sequential consolidation module 4 via the bus line. Transmission among neurons using a pulse signal may be performed using a technique based on, for example, the AER (Address Event Representation) technique (Lazzaro, et al., 1993, Silicon Auditory Processors as Computer Peripherals, in Touretzky, D. (ed), Advances in Neural Information Processing Systems 5, San Mateo, Calif., Morgan Kaufmann Publishers).
In a case where no high-order pattern is detected (that is, the detection output level of a high-order pattern is lower than a predetermined threshold value) but a middle-order pattern element is detected, the prediction unit 46 of the time-sequential consolidation module 4 selects one candidate for a high-order pattern that can include the detected middle-order pattern and predicts, using a method described later, the category and position (arrangement) of another middle-order pattern that will be detected in the candidate high-order pattern.
The middle-order pattern consolidation unit 42 then outputs, to the judgment unit 5, a signal whose level depends on whether a pattern of the predicted category is detected at the predicted position (the output level becomes high if the predicted pattern is detected at the predicted position) and which thus indicates a detection probability (detection likelihood) that a pattern of the predicted category will be detected. The control unit 6 obtains information indicating the position of the predicted middle-order pattern from the time-sequential consolidation module 4 and outputs a sampling point control signal to the local area scanning unit 1 so that the local area scanning unit 1 can next scan a local area centered at the position of the predicted middle-order pattern. This process is described in further detail later with reference to FIG. 5.
On the other hand, in a case where the local area recognition module 3 detects, in a local area, a high-order pattern with an output level higher than a predetermined threshold value, the local area recognition module 3 outputs information of the category (detection probability or detection likelihood) and position information of an object detected in that local area to the time-sequential consolidation module 4. The control unit 6 obtains position information of the detected pattern from the local area scanning unit 1 and transfers the position information to the judgment unit 5.
More specifically, if, among the outputs from the feature consolidation layer 103 (2, M) that is the highest layer in the local area recognition module 3, the maximum value of the outputs from neurons of a feature consolidation module belonging to a particular category fNM is greater than a predetermined threshold value, the maximum output of the neuron is supplied, as information indicating the category and position of detected object, to the time-sequential consolidation module 4.
In a case where the local area recognition module 3 detects both a high-order pattern and a middle-order pattern in the same local area (that is, the detection levels of the high-order pattern and the middle-order pattern in the same local area are higher than the predetermined threshold value), the maximum neuron output associated with the high-order pattern is supplied to the high-order pattern map generation unit 41 of the time-sequential consolidation module 4, while, as for the middle-order pattern, the neuron output of the feature consolidation layer 103 (2, m) is supplied to the middle-order pattern consolidation unit 42 via the bus line. Furthermore, in the time-sequential consolidation module 4, the above-described process is performed on both the high-order pattern and the middle-order pattern.
Now, the middle-order pattern consolidation unit 42 of the time-sequential consolidation module 4 is described. The middle-order pattern consolidation unit 42 is a signal processing circuit (so-called middleware) for outputting a predicted category of an undetected middle-order pattern included in a high-order pattern that can include the detected middle-order pattern and also outputting a predicted position thereof near the detected middle-order pattern.
More specifically, on the basis of the class of a specific object (high-order pattern such as a pattern of a face of a human being viewed from front) to be detected and also on the basis of the class of a detected middle-order pattern (a pattern of an element of the object, such as a pattern of an eye), the class of another middle-order pattern (for example, the other eye, a nose, or a mouth), that is, the predicted category and position thereof are determined.
In the present embodiment, for simplification of the circuit configuration, the circuit (prediction unit 46) that performs the prediction does not perform a complicated operation associated with a stochastic process or the like; rather, it is constructed as a logic circuit that refers to combinatory list data represented in the form of a dictionary with associated data (indicating the relative position vectors of possible middle-order patterns) and outputs the corresponding data.
As shown in FIGS. 6A and 6B , the list data is given in the form of a linked list of middle-order patterns included in a high-order pattern, and associated data represents the distance and direction of each middle-order pattern using relative position vectors.
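The list data with associated relative position vectors can be pictured as a small dictionary keyed by the high-order category, as in the sketch below; the categories and (dx, dy) values are illustrative assumptions, with the vectors taken relative to a first-detected "eye" (a face/eye/nose/mouth example is discussed later in the text).

```python
# Dictionary-with-associated-data used for prediction: for each high-order
# pattern, the middle-order patterns it contains and their relative position
# vectors. All categories and (dx, dy) values are illustrative assumptions,
# measured from a first-detected "eye" (scale-level index omitted).
HIGH_ORDER_PATTERNS = {
    "face": {
        "eye":   [(-40, 0), (40, 0)],   # the undetected eye may lie to either side
        "nose":  [(20, 25)],
        "mouth": [(20, 55)],
    },
}

def predict_next(high_order, detected_pos):
    """Predicted (category, absolute position) pairs relative to the detected middle-order pattern."""
    x, y = detected_pos
    return [(category, (x + dx, y + dy))
            for category, vectors in HIGH_ORDER_PATTERNS[high_order].items()
            for dx, dy in vectors]

if __name__ == "__main__":
    for category, position in predict_next("face", (100, 80)):
        print(category, position)
```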
The predicted position varies depending on the class of the detected middle-order pattern and on the processing channel to which the neuron having the maximum output in the feature consolidation layer 103 (2, m) of the local area recognition module 3 belongs. That is, in the present embodiment, differences in the size of the object (feature) to be detected are reflected in differences among the processing channels, so the positions (predicted positions) of middle-order patterns that have not yet been detected vary depending on the size.
The process is now described below for the case in which there are a plurality of high-order patterns to be detected and there is a category of a middle-order pattern that is commonly included in all high-order patterns. In particular, the process performed by the time-sequential consolidation module 4 is described in detail with reference to FIG. 5 .
First, in step S501, category information of high-order patterns that can include, as an element thereof, a middle-order pattern detected by the local area recognition module 3 is read from the memory 43 of the time-sequential consolidation module 4.
Then, in step S502, the category and the position of a middle-order pattern having a high probability of being detected next near the already-detected middle-order pattern are determined for each of the high-order patterns and stored in the primary storage 44.
In step S503, it is determined whether there can be a plurality of undetected middle-order patterns near the predicted position. If it is determined that there can be a plurality of such patterns, a pattern that is closer to the predicted position in a principal scanning direction (for example, to the right, or from upper left to bottom right) is selected (S504).
In step S505, output data indicating the predicted position of the pattern selected by the middle-order pattern consolidation unit 42 is input to the control unit 6 and used by the control unit 6 to control the scanning position. In the above process, the control unit 6 converts the predicted position information into position control data to be used by the local area scanning unit 1 to define the position of the local area. The resultant position control data is supplied to the local area scanning unit 1.
Furthermore, in step S506, the output from the middle-order feature consolidation layer 103 (2, m), which indicates the degree of consistency between a detected middle-order pattern and a candidate for a high-order pattern (the degree of consistency is determined one by one for all high-order pattern candidates), is supplied to the middle-order pattern consolidation unit 42 from the local area recognition module 3.
After the scanning position has been changed, the middle-order pattern consolidation unit 42 of the time-sequential consolidation module 4 acquires, under the control of the scanning unit 1, the recognition result for the local area data (the same area as that selected in step S504) from the local area recognition module 3 and judges the matching with the category of the already-detected middle-order pattern (S507), as described below. In the case where the judgment indicates good matching, it is checked whether there is a middle-order pattern that has not been detected yet (S508), and the flow returns to step S502 if there is. In step S509, the flow returns to step S501 if there is a high-order pattern that has not been tested yet. Accordingly, the above-described prediction and judgment at the middle-order pattern level is performed repeatedly as long as there is a middle-order pattern that has not been detected yet. Finally, information indicating the category of the high-order pattern judged as having a high degree of matching and the detection level (indicating the detection probability or detection likelihood) thereof are output to the judgment unit 5 (S510).
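The control flow of steps S501 through S510 can be summarized as in the sketch below, which assumes placeholder helpers (predict, recognize_at, judge_match) standing in for the prediction unit, the local area recognition module, and the matching judgment; it is a flow illustration, not the disclosed circuitry.

```python
# Control-flow sketch of the time-sequential consolidation loop (S501-S510),
# using assumed helper callables rather than the actual circuits.

def consolidate_time_sequentially(detected, candidates, predict, recognize_at, judge_match):
    """detected: (category, position) of the first-detected middle-order pattern.
    candidates: high-order categories that can contain it (S501).
    predict(h, found): next expected (category, position), or None if nothing is left (S502-S504, S508).
    recognize_at(position): recognition result after jumping the scan there (S505-S506).
    judge_match(h, found): True while the accumulated patterns still match h (S507)."""
    results = []
    for high_order in candidates:                         # S501 / S509 loop over candidates
        found = [detected]
        while True:
            nxt = predict(high_order, found)              # S502-S504
            if nxt is None:                               # S508: no undetected middle-order pattern left
                results.append((high_order, len(found)))  # detection level ~ number of parts found
                break
            category, position = nxt
            found.append((category, position, recognize_at(position)))  # S505-S506
            if not judge_match(high_order, found):        # S507
                break
    return results                                        # S510: handed to the judgment unit 5

if __name__ == "__main__":
    queued = iter([("nose", (120, 105)), ("mouth", (120, 135)), None])
    print(consolidate_time_sequentially(
        ("eye", (100, 80)), ["face"],
        predict=lambda h, found: next(queued),
        recognize_at=lambda position: 0.9,
        judge_match=lambda h, found: True))
```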
The judgment regarding the degree of matching of the middle-order pattern on the basis of the category of the high-order pattern (step S507) is described below. If category-to-configuration correspondence of remaining middle-order patterns that match the category of the high-order pattern and the category of the already-detected middle-order pattern are stored in advance in the form of a table in a memory, it is possible to make judgment by means of a simple logic decision process using a simple logic circuit.
An example of data indicating the correspondence is shown in FIG. 6A. Herein, the data indicating the correspondence is given in the form of a table. In this specific example, “face” is given as the category of a high-order pattern, and “eye” is given as the category of a first-detected middle-order pattern. Herein, if a middle-order pattern size is given by a channel number k (scale level k) of a middle-order pattern feature consolidation layer 103 (2, m), the categories and positions of the remaining middle-order patterns that match “face” and “eye” are given as “nose” with r_e-n,k, “mouth” with r_e-m,k, and “eye” with r_e-e1,k and r_e-e2,k. Herein, r denotes a relative position vector with respect to the already-detected middle-order pattern.
There are two position vectors for the remaining “eye”, because it is impossible, at this stage, to determine whether the detected eye is a right eye or a left eye. It becomes possible to determine whether the detected eye is a right eye or left eye when a pattern corresponding to the remaining eye is detected. In a case where two or more middle-order patterns such as “eye” and “nose” have been already detected, the relative position vectors of remaining middle-order patterns such as “mouth” can be uniquely determined.
Judgment
The construction of the judgment unit 5 is described below with reference to FIG. 7 . The judgment unit 5 includes a thresholding unit 51 and a detection pattern map information generation unit 52. The thresholding unit 51 performs a thresholding process on the detection level signal of a high-order pattern supplied from the time-sequential consolidation module 4. In the case where the threshold value depends on the input data (object to be detected), threshold information is supplied from the control unit 6. If the detection level signal supplied from the time-sequential consolidation unit 4 is higher than the threshold value, the detection pattern map information generation unit 52 stores information indicating the category and position of the high-order pattern into the memory 7 in which detected pattern map information associated with the entire input data is stored. Alternatively, the information may be supplied to a predetermined display.
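A minimal sketch of this judgment step is given below; the threshold value and the map record layout are assumptions.

```python
# Sketch of the judgment step: threshold the detection level supplied by the
# consolidation module and, if it passes, record the category and position.

def judge(detections, threshold=0.5):
    """detections: iterable of (category, position, detection_level) from the consolidation module."""
    pattern_map = []
    for category, position, level in detections:
        if level > threshold:                          # thresholding unit 51
            pattern_map.append({"category": category,  # detection pattern map entry (unit 52)
                                "position": position,
                                "level": level})
    return pattern_map

if __name__ == "__main__":
    print(judge([("face", (96, 64), 0.83), ("face", (10, 200), 0.31)]))
```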
The above-described construction makes it possible to detect the position of a pattern of a specific category from input data (image) using a simple circuit configuration. Furthermore, because the recognition circuit deals with only part of the input data and is capable of detecting both middle-order and high-order patterns, a great reduction in circuit complexity and a greater improvement in efficiency are achieved, compared with the construction in which a plurality of features at a plurality of positions in the input data are detected simultaneously and in parallel.
The pattern recognition apparatus described above may be disposed on an image inputting device such as a camera or on an image outputting device such as a printer or a display. In a case where the pattern recognition apparatus is disposed on an image inputting device, it becomes possible to recognize or detect a specific object and perform focusing, exposure adjustment, zooming, color correction, and/or other processing with respect to an area centered at the detected object, using a small-scale circuit having low power consumption. If the pattern recognition apparatus is disposed on an image outputting device, it becomes possible to automatically perform optimum color correction for a specific subject.
The pattern detection (recognition) apparatus according to the present embodiment may be disposed on an imaging apparatus to perform focusing of a specific subject, color correction of a specific subject, and exposure adjustment for a specific subject, as described below with reference to FIG. 13 , which illustrates main parts of the imaging apparatus including the pattern recognition apparatus according to the present embodiment.
As shown in FIG. 13 , the imaging apparatus 1101 includes an imaging optical system 1102 including an imaging lens and a zooming mechanism, a CCD or CMOS image sensor 1103, an imaging parameter measuring unit 1104, an image signal processing circuit 1105, a storage unit 1106, a control signal generator 1107 for generating a control signal for controlling an operation of taking an image and controlling an imaging condition, a display 1108 also serving as a viewfinder such as an EVF, a flash lamp 1109, and a storage medium 1110. Furthermore, a pattern recognition apparatus capable of performing time division multiplexing processing is provided as an object detection (recognition) apparatus 1111.
In this imaging apparatus 1101, a face image of a person, registered in advance, is detected (in terms of the position and the size) from an image being taken, using the object detection (recognition) apparatus 1111. Information about the position and the size of the person image is supplied from the object detection (recognition) apparatus 1111 to the control signal generator 1107. In response, the control signal generator 1107 generates a control signal on the basis of the output from the imaging parameter measuring unit 1104 to properly control the focus, the exposure, and the white balance with respect to the image of that person.
By using the pattern detection (recognition) apparatus in the imaging apparatus in the above described manner, it becomes possible to detect an image of a person and properly control the imaging conditions for the detected image at a high speed (in real time) using a small-sized circuit having low power consumption.
In this second embodiment, the sampling point position scanned by the local area scanning unit 1 is changed in accordance with a predetermined procedure (raster scanning procedure), and the block size is fixed (based on the predetermined maximum size of an object to be detected). Thus, in the present embodiment, the controlling of the sampling point position during the process does not depend on the output from the local area recognition module 3. As in the previous embodiment, the local area recognition module 3 detects a middle-order or high-order pattern. The construction of the pattern recognition apparatus is similar to that according to the first embodiment.
Of course, high-order patterns to be detected should have a size smaller than the block size. Scanning is performed over the entire input data without changing the block size. As in the first embodiment, the local area recognition module 3 includes processing channels assigned to different object sizes to detect an object for various different sizes.
In the above scanning process, if the detection level of a middle-order or high-order pattern is higher than a predetermined threshold value, then in steps S803 a and S803 b, the local area recognition unit 3 outputs the detection level (maximum neuron output level of those in the feature consolidation layer) of the middle-order or high-order pattern detected in the scanning process. In step S804, the time-sequential consolidation unit 4 stores, into the primary storage 44, detection pattern distribution (map) information, the category, the detection level, and the position of the pattern each time such a pattern is detected.
Herein, the stored middle-order pattern data is part of a high-order pattern having a size (greater than the block size) that cannot be detected in a local area with a given size.
After completion of changing the scanning position over the entire input data, the judgment unit 5 checks the data stored in the primary storage 44 of the time-sequential consolidation unit 4 to judge whether an object image (high-order pattern) is present in an area around the position where the middle-order pattern has been detected (the high-order pattern including that middle-order pattern cannot be detected at the position where the middle-order pattern is detected because of the limitation of the block size). If the high-order pattern (object to be detected) is determined to be present, the position and the category thereof are determined (step S805).
Unlike the previous embodiment, the process in step S805 is not a simple thresholding process. As shown in FIG. 14 , the process performed in step S805 is basically the same as the process performed by the time-sequential consolidation unit 4 in the first embodiment described above. That is, the process is performed as described below while scanning the detection map associated with a middle-order pattern stored in the primary storage 44.
First, in step S8101, high-order pattern categories are input and one of them is selected. Thereafter, in step S8102, a next predicted position to jump to in the scanning of the detection map is determined. A category of a feature predicted to be present at that position is also determined. When a plurality of middle-order patterns included in the high-order pattern can be present near each other, the process is performed (steps S8105 and S8106) in a similar manner to the process performed in steps S505 and S506 by the time-sequential consolidation unit 4 according to the first embodiment described earlier with reference to FIG. 5 .
Furthermore, matching between the middle-order pattern at the predicted position and the high-order pattern is evaluated by performing a simple logical decision (step S8107). Thereafter, the process from step S8101 to step S8107 is performed repeatedly until it is determined in steps S8108 and S8109 that there are no more middle-order patterns that have not been detected yet and there are no more high-order patterns that have not been evaluated yet. After that, map information associated with a detected high-order pattern is output as a final result (S8110).
In step S8110 described above, a combination of middle-order patterns which match (in terms of the arrangement) the one of prepared high-order patterns is extracted, and information about the type of the high-order pattern and the position thereof is output.
When a high-order pattern is detected at a particular position, the judgment described above is not necessary and thus is not performed.
In this third embodiment, the size of the block-shaped local area defined by the scanning unit 1 is controlled by a block setting unit (not shown), and consolidation and recognition are performed by the local area recognition module 3, the time-sequential consolidation module 4, and the judgment unit 5. As in the first embodiment, the local area recognition module 3 includes a plurality of parallel processing channels corresponding to different scale levels. The block size may be updated according to one of two methods described below. In the first method, the control unit 6 determines the block size at each scanning position, and the local area recognition module 3 outputs data at each scanning position. In the second method, consolidation and recognition are performed by scanning the entire input data while fixing the block size. Thereafter, the block size is changed and consolidation and recognition are performed for the updated block size.
In the second method, in many cases, a pattern can be efficiently detected if the block size is sequentially reduced in the subsequent processes. In any case, the local area recognition module 3 detects only a high-order pattern, and thus the data supplied to the time-sequential consolidation unit 4 is output only from the highest-level feature consolidation layer. Except for the above, the process performed by the recognition module 3 is similar to that according to the previous embodiments.
The difference in the block-shaped local area in the scanning process corresponds to the difference in the processing channel of the local area recognition module 3 described above with reference to the first embodiment. That is, a high-order pattern is detected at respective scanning positions for various sizes.
In the present embodiment, as described above, only a high-order pattern in a local area is detected by scanning the input data while controlling the block size in accordance with the predetermined procedure. This makes it possible to construct the respective modules (such as the local area recognition module 3, the time-sequential consolidation module 4, and the judgment unit 5) in a simplified fashion and minimize the power consumption.
That is, for the same local area in the input data supplied from the local area scanning unit 1, the local area recognition module 3 detects patterns of various different categories by time-sequentially changing the category during the detection process. In this pattern detection process, intermediate results obtained at respective sampling points of the input data are stored in memories 8 0, 8 1, . . . , 8 M, and then the intermediate detection results of the respective feature consolidation layers are read from the memories 8 0, 8 1, . . . , 8 M and consolidated by the time-sequential consolidation module 4.
As in the previous embodiments, the local area recognition module 3 hierarchically detects patterns of various orders from low to high using feature detection layers 102 and feature consolidation layers 103 alternately disposed in a cascade arrangement.
Outputs from the respective feature detection layers 102 are sub-sampled by the feature consolidation layers 103 at the respective stages, as in the previous embodiments, and the results are temporarily stored in memories 8 0, 8 1, . . . , 8 M associated with the respective feature consolidation layers (103) such that different feature types are stored at different memory addresses. Furthermore, in the feature detection layers 102, as described below, the synapse weight distribution (local receptive field structure) is changed, and the detection results for the respective feature types are time-sequentially input from the memory 8. For example, when a pattern of an eye (middle-order pattern) is detected, the local receptive field structure of the feature detection layer 102 is formed such that local receptive field structures corresponding to the respective low-order patterns P1, P2, . . . , Pn, which are needed to detect the middle-order pattern, are provided each time the output from the feature consolidation layer corresponding to a pattern Pk (k=1, . . . , n) is input from the memory 8.
The local receptive field structures of the feature detection layer 102 are retained in the form of digital data in a memory such as an SRAM 40 for each feature type, and the local receptive field structures are changed as required in accordance with the data stored in the memory 40. More specifically, the local receptive field structures can be realized using a dynamically reconfigurable FPGA and using a receptive field control circuit 45 for controlling associated synapse circuit elements.
For example, the receptive field structure of neurons in a feature detection layer 102 that detects a pattern of an eye at a certain time is changed at another time in accordance with a signal from the control unit 6 and the receptive field control circuit 45 so as to detect another pattern such as a nose or mouth pattern.
In the present embodiment, as described above, when input data is given part by part, different features are detected (recognized) by performing the time division multiplexing process in the local area recognition module 3, thereby achieving a great reduction in circuit complexity compared with the circuit which simultaneously detects a plurality of features at the plurality of sampling positions in the input data by means of parallel operations.
The receptive field structure is determined by data called configuration bits of the FPGA stored in an SRAM (not shown). Thus, the receptive field structure is time-sequentially changed by the receptive field control circuit 45 by dynamically changing the configuration of the FPGA, that is, by changing the configuration bits stored in the FPGA in accordance with data stored in the memory 40. That is, the configuration bits serve as data that determines the receptive field structure.
In general, to realize a neural network including neurons having a local receptive field structure, a memory and a control circuit are needed to change the configuration bits for respective neurons. The neural network for the local area recognition module 3 according to the first embodiment described earlier can be realized using one set of configuration bits for each feature detection layer, the memory 40, and the receptive field control circuit 45, as described below.
If the feature to be detected at the respective sampling points at any given time by the feature detection layer 102 is limited to one type (feature category or size), the local receptive fields of the neurons in that feature detection layer become identical. As a result, it becomes possible to use the memory 40 and the receptive field control circuit 45 in common to determine the structure of all local receptive fields. That is, it is possible to time-sequentially change the configuration bits of the respective neurons in the feature detection layers in a simple fashion.
In general, the configuration bits determine only the structure of the logical connections (interconnections). That is, the presence/absence of connection between a neuron and another neuron in a layer at a preceding stage is specified by a configuration bit. The weight value associated with each connection is set and changed so as to achieve the receptive field structure by setting and changing the weight data of the synapse circuit in accordance with the weighting data supplied from the memory 40.
For example, in a case where the synaptic weight is given by the amount of charge injected in a floating gate element or stored in a capacitor, the synaptic weight for each synaptic is set and changed by injecting as much amount of charge as specified by the weight data stored in the memory 40. More specifically, the receptive field control circuit 45 reads the synaptic weight data (indicating the voltage to be applied to inject a required amount of charge) from the memory 40 at a specified address, and the receptive field control circuit 45 injects a current into a floating gate element until the specified amount of charge is stored (until the specified voltage is obtained).
Similarly, the synapse circuit elements Sk (k=1, 2, . . . ) that constitute the receptive field structure are time-sequentially accessed, and charges (hot electrons) are injected by applying a voltage thereto thereby setting the distribution of synaptic weight (receptive field structure). Alternatively, it is possible to use a memory device to store data corresponding to the weights, if the data can be rewritten quickly enough and if the data can be retained for a period of time as long as required in that device.
The receptive field structures of respective neurons in the feature detection layer are changed depending on the feature type. However, if the scale level, which is one of the feature types, is not changed, the receptive field structures of neurons in the feature consolidation layer are not changed. Note that specific values of the configuration bits are different from one neuron to another to reflect the difference in the actual interconnection (address) depending on the locations of the neurons in the respective feature detection layers.
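The time-sequential reprogramming of a shared receptive field can be sketched as follows, with a SynapseElement class standing in for the analog synapse circuit element and a plain dictionary standing in for the weight memory 40; both are assumptions made only for illustration.

```python
# Sketch of sequentially programming a shared receptive field from stored
# weight data; the class and dictionary stand in for analog circuitry.

class SynapseElement:
    """Stands in for one analog synapse circuit element Sk."""
    def __init__(self):
        self.weight = 0.0
    def program(self, value):
        # Stands in for injecting charge / applying a voltage read from the memory 40.
        self.weight = value

def configure_receptive_field(synapses, weight_memory, feature_type):
    """Sequentially program the shared receptive field for the feature type being detected."""
    for synapse, value in zip(synapses, weight_memory[feature_type]):
        synapse.program(value)

if __name__ == "__main__":
    weight_memory = {"eye":  [0.2, 0.5, 0.9, 0.5, 0.2],      # assumed weight data per feature type
                     "nose": [0.1, 0.8, 1.0, 0.8, 0.1]}
    synapses = [SynapseElement() for _ in range(5)]
    configure_receptive_field(synapses, weight_memory, "eye")
    print([s.weight for s in synapses])
    configure_receptive_field(synapses, weight_memory, "nose")  # same circuit, new receptive field
    print([s.weight for s in synapses])
```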
In this fifth embodiment, a synapse circuit with a receptive field structure is realized using a 2-dimensional systolic array processor, and the receptive field structure is changed by changing the time-sequential data supplied to the systolic array elements to control pipeline processing (description of the systolic array can be found, for example, in “Parallel Computer Architecture” by Tomita (Shokodo, pp. 190-192, 1986), “Digital Neural Networks” by S. Y. Kung (PTR Prentice Hall, Englewood Cliffs, pp. 340-361, 1993), and Japanese Examined Patent Application Publication No. 2741793). FIG. 11 is a diagram illustrating main parts of the fifth embodiment. The synaptic weight data stored in a memory 40 is time-sequentially supplied to the respective synapse circuit elements arranged in a systolic array structure in the feature detection layer 102 and the feature consolidation layer 103, thereby controlling the local receptive field structure dynamically and time-sequentially. The synaptic weight may be given, for example, by the amount of charge injected into a floating gate element or stored in a capacitor. The respective synapse circuit elements Sk are sequentially accessed, and voltage signals corresponding to the weight data read from the memory 40 are applied to them, as in the fourth embodiment described above. As a result, a drastic simplification of the circuit configuration is achieved.
The outputs from the local area recognition module 3 (outputs from the feature consolidation layers) are consolidated by the consolidation module 4 in synchronization with the timing control signal of the systolic array processor supplied from the control unit 6, and the judgment unit 5 judges whether there is an object of the specified category. The processes performed by the time-sequential consolidation module 4 and the judgment unit 5 are substantially the same as those described earlier in the first embodiment, and thus they are not described herein.
In the following steps S1202 and S1203, feature data or image data of a specific category, weighted in accordance with the receptive field structure, is input to the detection modules in the feature detection layer from the memory 8 or the data inputting layer 101. In step S1203, the receptive field control circuit 45 time-sequentially sets the receptive field structure using the pipeline data. As in the previous embodiment, the receptive field structures of the respective neurons in the feature detection layer are changed depending on the feature type; however, if the scale level, which is one of the feature types, is not changed, the receptive field structures of the neurons in the feature consolidation layer are not changed.
In step S1204, the outputs from the feature detection layers are sub-sampled (in the feature consolidation layer) for respective feature types, and the results are stored in the memory 8 at different addresses depending on the feature type. The process from step S1201 to step S1204 is performed repeatedly for respective feature categories and layer numbers. If it is determined in step S1205 that the process is completed for all feature categories and layer numbers, the process proceeds to step S1206. In step S1206, the time-sequential consolidation module 4 reads the detection results associated with the respective feature types from the memory 8 and produces a detection map of middle-order or high-order features. In step S1207, the judgment unit 5 performs a thresholding process to finally determine whether an object of the specified category is present. If such an object is present, the judgment unit 5 outputs information indicating the position thereof.
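The control flow of steps S1201 through S1207 can be summarized by the following schematic sketch. The function and memory-interface names (detect_features, subsample, consolidate, memory.load/store) are assumptions made for illustration, not the actual interfaces of the modules described above.

```python
def recognize(input_data, feature_categories, layers, memory, threshold):
    # S1201-S1204: repeated for every feature category and layer number
    for layer in layers:
        for category in feature_categories:
            rf = memory.load_receptive_field(layer, category)   # pipeline data (S1203)
            detected = detect_features(input_data, rf)          # feature detection layer
            pooled = subsample(detected)                        # feature consolidation layer (S1204)
            memory.store(layer, category, pooled)               # address depends on category

    # S1206: consolidate the per-category detection results into a detection map
    detection_map = consolidate(
        {(layer, cat): memory.load(layer, cat)
         for layer in layers for cat in feature_categories})

    # S1207: thresholding decides whether an object of the specified category is present
    return [(position, score) for position, score in detection_map.items()
            if score > threshold]
```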
In the present invention, as described above in detail with reference to specific embodiments, a plurality of features are detected in local areas while scanning input data, and the plurality of features detected in the local areas are integrated to finally detect (recognize) a pattern of a specific category with a specific size. This makes it possible to detect (recognize) a pattern in a highly efficient manner using a very simple circuit.
Furthermore, the present invention makes it possible to efficiently extract local features (patterns) of specific categories for various different sizes, using a small-scale circuit.
Furthermore, consolidation of local patterns extracted (detected) at different positions can easily be performed by a simple logic circuit that refers to data listing, together with associated data, the configurations of middle-order patterns, as sketched below. This makes it possible to detect a high-order pattern quickly.
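As a rough illustration of such list-based consolidation (the data format, the positional tolerance, and the matching rule below are assumptions, not the patent's), a middle-order pattern can be described as a list of expected low-order feature categories and their positions relative to a common anchor, and the detected local features can be tested against that list with simple logical operations.

```python
def matches_middle_order(detected_features, pattern_spec, tol=2):
    """detected_features: {(x, y): category}; pattern_spec: [((x, y), category), ...].

    Positions are assumed to already be expressed relative to the same anchor;
    `tol` is an arbitrary positional tolerance.
    """
    def found(px, py, category):
        return any(abs(x - px) <= tol and abs(y - py) <= tol and c == category
                   for (x, y), c in detected_features.items())
    # the middle-order pattern is present only if every listed low-order feature is found
    return all(found(px, py, cat) for (px, py), cat in pattern_spec)
```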
Furthermore, even when an object is partially occluded by another object, the object can be detected in a highly reliable fashion by detecting low-order patterns or middle-order patterns on the basis of the output from the sensor and integrating them.
Furthermore, the circuit complexity can be greatly reduced by changing the receptive field structure depending on the type of feature to be detected.
Although the present invention has been described in its preferred form with a certain degree of particularity, many apparently widely different embodiments of the invention can be made without departing from the spirit and the scope thereof. It is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.
Claims (48)
1. A pattern recognition apparatus comprising:
data inputting means for inputting image data by time-sequentially inputting blocks of the image data, each of the blocks being a predetermined size;
position information inputting means for inputting position information representing a position of each of the blocks in the input image data;
pattern detection means for detecting a low-order feature pattern of a predetermined category from each block of the input image data;
prediction means for predicting a category and a position of a first low-order feature pattern to be detected on the basis of a second low-order feature pattern which has been detected;
time-sequential consolidation means for time-sequentially consolidating low-order feature patterns detected from a plurality of blocks of the input image data by said pattern detection means on the basis of the position information input by said position information inputting means and the category of each detected feature pattern, the consolidated low-order feature patterns forming a high-order feature pattern, said time-sequential consolidation means comparing a category of the first low-order feature pattern detected at the predicted position with the predicted category and determining the likelihood of presence of the high-order feature pattern formed by the first low-order feature pattern on the basis of a result of the comparison; and
judgment means for judging a position and a category of a high-order feature pattern present in the input image data, on the basis of the output from said time-sequential consolidation means.
2. A pattern recognition apparatus according to claim 1 , wherein said pattern detection means or said time-sequential consolidation means includes storage means for storing a process result.
3. A pattern recognition apparatus according to claim 2 , further comprising size changing means for changing the size of the blocks of pattern data, where said judgment means makes judgment on the basis of results of consolidation for different block sizes.
4. A pattern recognition apparatus according to claim 2 , wherein said pattern detection means includes an operation element for detecting geometrical features with different sizes in the blocks of pattern data.
5. A pattern recognition apparatus according to claim 1 , wherein said data inputting means inputs the blocks of pattern data having a predetermined size by scanning input data.
6. A pattern recognition apparatus according to claim 5 , further comprising scanning control means for changing a scanning position of said data inputting means on the basis of the likelihood of presence of a high-order pattern to be detected, determined by said time-sequential consolidation means.
7. A pattern recognition apparatus according to claim 1 , further comprising control means for time-sequentially changing an operation characteristic of the operation element of said pattern detection means.
8. A pattern recognition apparatus according to claim 1 , wherein said pattern detection means extracts a predetermined local feature at each position in the data.
9. A pattern recognition apparatus according to claim 8 , wherein said time-sequential consolidation means stores a detection result of a local feature together with associated position information into a predetermined primary storage means.
10. A pattern recognition apparatus according to claim 1 , wherein said pattern detection means is parallel processing means including a plurality of operation elements arranged in parallel and connected to each other.
11. A pattern recognition apparatus according to claim 1 , wherein the operation element of said pattern detection means is constructed such that a plurality of feature detection layers and a plurality of feature consolidation layers are alternately disposed and connected in a cascading fashion.
12. A pattern recognition apparatus according to claim 1 , wherein said pattern detection means has the predetermined operation characteristic, said time-sequential consolidation means consolidates the outputs, associated with patterns at a plurality of scanning positions, from said pattern detection means, and said judgment means outputs information indicating the position, in the input data, of a pattern of a specified category, together with information indicating the category.
13. A pattern recognition apparatus according to claim 1 , wherein said time-sequential consolidation means consolidates patterns detected at scanning positions on the basis of the position information, and said judgment means judges whether there is a high-order pattern including the detected patterns.
14. A pattern recognition apparatus according to claim 1 , further comprising control means for controlling the operation characteristic of said pattern detection means so that patterns of different categories with different sizes can be detected in the input pattern data.
15. A pattern recognition apparatus according to claim 1 , wherein the detection map information is information about a position of the pattern and at least one of a type and a detection level of the pattern.
16. An image processing apparatus which controls a process performed on a signal of an image in accordance with a signal which is output, after being processed by a pattern recognition apparatus according to claim 1 , from said pattern recognition apparatus.
17. A pattern recognition apparatus comprising:
data inputting means for inputting image data by scanning the image data of a predetermined size at a plurality of scanning positions;
detection means for detecting a predetermined feature from each of the scanned image data;
prediction means for predicting a category and a position of first scanned image data to be detected on the basis of second scanned image data which has been detected;
scanning position changing means for changing a scanning position at which the image data is scanned by said data inputting means to the position predicted by said prediction means;
consolidation means for consolidating a plurality of features detected at different scanning positions on the basis of the scanning position of each detected feature, said consolidation means comparing a category of the first scanned image data detected at the predicted position with the predicted category and determining, on the basis of the result, the likelihood of presence of a specific pattern formed by the first scanned image data; and
judgment means for judging the position and the type of the specific pattern, on the basis of the output from said consolidation means.
18. A pattern recognition apparatus comprising:
a data inputting unit for inputting image data by time-sequentially inputting blocks of the image data, each of the blocks being a predetermined size;
a position information inputting unit for inputting position information representing a position of each of the blocks in the input image data;
a pattern detection unit for detecting a low-order feature pattern of a predetermined category from each block of the input image data;
a prediction unit for predicting a category and a position of a first low-order feature pattern to be detected on the basis of a second low-order feature pattern which has been detected;
a time-sequential consolidation unit for time- sequentially consolidating low-order feature patterns detected from a plurality of blocks of the input image data detected by said pattern detection unit on the basis of the position information input by said position information inputting unit and the category of each detected feature pattern, the consolidated low-order feature patterns forming a high-order feature pattern, said time-sequential consolidation unit comparing a category of the first low-order feature pattern detected at the predicted position with the predicted category and determining the likelihood of presence of the high-order feature pattern formed by the first low-order feature pattern on the basis of a result of the comparison; and
a judgment unit for judging a position and a category of a high-order feature pattern present in the input image data, on the basis of the output from said time-sequential consolidation unit.
19. A pattern recognition apparatus according to claim 18 , wherein said pattern detection unit or said time-sequential consolidation unit includes storage means for storing a process result.
20. A pattern recognition apparatus according to claim 19 , further comprising a size changing unit for changing the size of the blocks of pattern data, where said judgment unit makes judgment on the basis of results of consolidation for different block sizes.
21. A pattern recognition apparatus according to claim 19 , wherein said pattern detection unit includes an operation element for detecting geometrical features with different sizes in the blocks of pattern data.
22. A pattern recognition apparatus according to claim 18 , wherein said data inputting unit inputs the blocks of pattern data having a predetermined size by scanning input data.
23. A pattern recognition apparatus according to claim 22 , further comprising a scanning control unit for changing a scanning position of said data inputting unit on the basis of the likelihood of presence of a high-order pattern to be detected, determined by said time-sequential consolidation unit.
24. A pattern recognition apparatus according to claim 18 , further comprising a control unit for time-sequentially changing an operation characteristic of the operation element of said pattern detection unit.
25. A pattern recognition apparatus according to claim 18 , wherein said pattern detection unit extracts a predetermined local feature at each position in the data.
26. A pattern recognition apparatus according to claim 25 , wherein said time-sequential consolidation unit stores a detection result of a local feature together with associated position information into a predetermined primary storage unit.
27. A pattern recognition apparatus according to claim 18 , wherein said pattern detection unit is a parallel processing unit including a plurality of operation elements arranged in parallel and connected to each other.
28. A pattern recognition apparatus according to claim 18 , wherein the operation element of said pattern detection unit is constructed such that a plurality of feature detection layers and a plurality of feature consolidation layers are alternately disposed and connected in a cascading fashion.
29. A pattern recognition apparatus according to claim 18 , wherein said pattern detection unit has the predetermined operation characteristic, said time-sequential consolidation unit consolidates the outputs, associated with patterns at a plurality of scanning positions, from said pattern detection unit, and said judgment unit outputs information indicating the position, in the input data, of a pattern of a specified category, together with information indicating the category.
30. A pattern recognition apparatus according to claim 18 , wherein said time-sequential consolidation unit consolidates patterns detected at scanning positions on the basis of the position information, and said judgment unit judges whether there is a high-order pattern including the detected patterns.
31. A pattern recognition apparatus according to claim 18 , further comprising a control unit for controlling the operation characteristic of said pattern detection unit so that patterns of different categories with different sizes can be detected in the input pattern data.
32. A pattern recognition apparatus according to claim 18 , wherein the detection map information is information about a position of the pattern and at least one of a type and a detection level of the pattern.
33. An image processing apparatus which controls a process performed on a signal of an image in accordance with a signal which is output, after being processed by a pattern recognition apparatus according to claim 18 , from said pattern recognition apparatus.
34. A pattern recognition apparatus comprising:
a data inputting unit for inputting image data by scanning the image data of a predetermined size at a plurality of scanning positions;
a detection unit for detecting a predetermined feature from each of the scanned image data;
a prediction unit for predicting a category and a position of first scanned image data to be detected on the basis of second scanned image data which has been detected;
a scanning position changing unit for changing a scanning position at which the image data is scanned by said data inputting unit to the position predicted by said prediction unit;
a consolidation unit for consolidating a plurality of features detected at different scanning positions on the basis of the scanning position of each detected feature, said consolidation unit comparing a category of the first scanned image data detected at the predicted position with the predicted category and determining, on the basis of the result, the likelihood of presence of a specific pattern formed by the first scanned image data; and
a judgment unit for judging the position and the type of the specific pattern, on the basis of the output from said consolidation unit.
35. A pattern recognition method comprising the steps of:
time-sequentially inputting blocks of the image data, each of the blocks being a predetermined size;
inputting position information representing a position of each of the blocks in the input image data;
detecting a low-order feature pattern of a predetermined category from each block of the input image data;
predicting a category and a position of a first low-order feature pattern to be detected on the basis of a second low-order feature pattern which has been detected;
consolidating low-order feature patterns detected from a plurality of blocks of the input image data in said detecting step on the basis of the position information input in said position information inputting step and the category of each detected feature pattern, said consolidating step comparing a category of the first low-order feature pattern detected at the predicted position with the predicted category and determining the likelihood of presence of the high-order feature pattern formed by the first low-order feature pattern on the basis of a result of the comparison; and
judging a position and a category of a high-order feature pattern present in the input image data, on the basis of the output in said consolidation step.
36. A pattern recognition method according to claim 35 , wherein said detection step or said consolidation step includes storing a process result.
37. A pattern recognition method according to claim 36 , further comprising a step of changing the size of the blocks of pattern data, where said outputting step outputs information on the basis of results of consolidation for different block sizes.
38. A pattern recognition method according to claim 36 , wherein said detection step includes detecting geometrical features with different sizes in the blocks of pattern data.
39. A pattern recognition method according to claim 35 , wherein said inputting step inputs the blocks of pattern data having a predetermined size by scanning input data.
40. A pattern recognition method according to claim 39 , further comprising a scanning control step of changing a scanning position in said inputting step on the basis of the likelihood of presence of a high-order pattern to be detected, determined in said consolidation step.
41. A pattern recognition method according to claim 35 , further comprising a control step for time-sequentially changing an operation characteristic in said detection step.
42. A pattern recognition method according to claim 35 , wherein said detection step extracts a predetermined local feature at each position in the data.
43. A pattern recognition method according to claim 42 , wherein said consolidation step further includes storing a detection result of a local feature together with associated position information into a predetermined primary storage unit.
44. A pattern recognition method according to claim 35 , wherein said outputting step further includes outputting information indicating the position, in the input data, of a pattern of a specified category, together with information indicating the category.
45. A pattern recognition method according to claim 35 , wherein said consolidation step further includes consolidating patterns detected at scanning positions on the basis of the position information, and said outputting step judges whether there is a high-order pattern including the detected patterns.
46. A pattern recognition method according to claim 35 , further comprising a control step of controlling an operation characteristic in said detection step so that patterns of different categories with different sizes can be detected in the input pattern data.
47. A pattern recognition method according to claim 35 , wherein the detection map information is information about a position of the pattern and at least one of a type and a detection level of the pattern.
48. A pattern recognition method comprising the steps of:
scanning image data of a predetermined size at a plurality of scanning positions;
detecting a predetermined feature from each of the scanned image data;
predicting a category and a position of first scanned image data to be detected on the basis of second scanned image data which has been detected;
changing a scanning position at which the image data is scanned in said scanning step to the position predicted by said prediction step;
consolidating a plurality of features detected at different scanning positions on the basis of the scanning position of each detected feature, said consolidating step comparing a category of the first scanned image data detected at the predicted position with the predicted category and determining, on the basis of the result, the likelihood of presence of a specific pattern formed by the first scanned image data; and
judging the position and the type of the specific pattern, on the basis of the output from said consolidation step.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001164510A JP2002358523A (en) | 2001-05-31 | 2001-05-31 | Device and method for recognizing and processing pattern, and image input device |
JP164510/2001(PAT.) | 2001-05-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20020181775A1 US20020181775A1 (en) | 2002-12-05 |
US7274819B2 true US7274819B2 (en) | 2007-09-25 |
Family
ID=19007324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/156,942 Expired - Lifetime US7274819B2 (en) | 2001-05-31 | 2002-05-30 | Pattern recognition apparatus using parallel operation |
Country Status (2)
Country | Link |
---|---|
US (1) | US7274819B2 (en) |
JP (1) | JP2002358523A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050185835A1 (en) * | 2004-01-29 | 2005-08-25 | Canon Kabushiki Kaisha | Learning method and device for pattern recognition |
US20060228027A1 (en) * | 2001-03-28 | 2006-10-12 | Canon Kabushiki Kaisha | Dynamically reconfigurable signal processing circuit, pattern recognition apparatus, and image processing apparatus |
US20080219516A1 (en) * | 2006-08-30 | 2008-09-11 | Canon Kabushiki Kaisha | Image matching apparatus, image matching method, computer program and computer-readable storage medium |
US20090219405A1 (en) * | 2008-02-29 | 2009-09-03 | Canon Kabushiki Kaisha | Information processing apparatus, eye open/closed degree determination method, computer-readable storage medium, and image sensing apparatus |
US20090324060A1 (en) * | 2008-06-30 | 2009-12-31 | Canon Kabushiki Kaisha | Learning apparatus for pattern detector, learning method and computer-readable storage medium |
US20110158535A1 (en) * | 2009-12-24 | 2011-06-30 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US8352265B1 (en) | 2007-12-24 | 2013-01-08 | Edward Lin | Hardware implemented backend search engine for a high-rate speech recognition system |
US8463610B1 (en) | 2008-01-18 | 2013-06-11 | Patrick J. Bourke | Hardware-implemented scalable modular engine for low-power speech recognition |
US8639510B1 (en) | 2007-12-24 | 2014-01-28 | Kai Yu | Acoustic scoring unit implemented on a single FPGA or ASIC |
US8755611B2 (en) | 2010-08-18 | 2014-06-17 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method |
US8761459B2 (en) | 2010-08-06 | 2014-06-24 | Canon Kabushiki Kaisha | Estimating gaze direction |
US8792725B2 (en) | 2011-09-14 | 2014-07-29 | Canon Kabushiki Kaisha | Information processing apparatus, control method for information processing apparatus and storage medium |
US9053431B1 (en) | 2010-10-26 | 2015-06-09 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9251400B2 (en) | 2011-08-26 | 2016-02-02 | Canon Kabushiki Kaisha | Learning apparatus, method for controlling learning apparatus, detection apparatus, method for controlling detection apparatus and storage medium |
US9298984B2 (en) | 2011-11-30 | 2016-03-29 | Canon Kabushiki Kaisha | Object detection apparatus, method for controlling the object detection apparatus, and storage medium |
US9361534B2 (en) | 2011-08-05 | 2016-06-07 | Megachips Corporation | Image recognition apparatus using neural network processing |
US20170091671A1 (en) * | 2015-09-25 | 2017-03-30 | Canon Kabushiki Kaisha | Classifier generation apparatus, classifier generation method, and storage medium |
US20170140247A1 (en) * | 2015-11-16 | 2017-05-18 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object, and method and apparatus for training recognition model |
US9852159B2 (en) | 2009-06-18 | 2017-12-26 | Canon Kabushiki Kaisha | Image recognition method and image recognition apparatus |
US9875440B1 (en) | 2010-10-26 | 2018-01-23 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US10860877B2 (en) * | 2016-08-01 | 2020-12-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Logistics parcel picture processing method, device and system |
US12124954B1 (en) | 2022-11-28 | 2024-10-22 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100455294B1 (en) * | 2002-12-06 | 2004-11-06 | 삼성전자주식회사 | Method for detecting user and detecting motion, and apparatus for detecting user within security system |
AU2003289116A1 (en) | 2002-12-16 | 2004-07-09 | Canon Kabushiki Kaisha | Pattern identification method, device thereof, and program thereof |
EP2955662B1 (en) | 2003-07-18 | 2018-04-04 | Canon Kabushiki Kaisha | Image processing device, imaging device, image processing method |
DE602004023228D1 (en) * | 2003-12-16 | 2009-10-29 | Canon Kk | Pattern Identification Method, Device, and Program |
JP4217664B2 (en) * | 2004-06-28 | 2009-02-04 | キヤノン株式会社 | Image processing method and image processing apparatus |
JP5008269B2 (en) * | 2005-04-08 | 2012-08-22 | キヤノン株式会社 | Information processing apparatus and information processing method |
WO2007141679A1 (en) | 2006-06-08 | 2007-12-13 | Koninklijke Philips Electronics N.V. | Pattern detection on an simd processor |
JP5018587B2 (en) * | 2008-03-25 | 2012-09-05 | セイコーエプソン株式会社 | Object detection method, object detection apparatus, object detection program, and computer-readable recording medium recording object detection program |
JPWO2010106587A1 (en) | 2009-03-18 | 2012-09-13 | パナソニック株式会社 | Neural network system |
JP5675145B2 (en) * | 2010-03-30 | 2015-02-25 | キヤノン株式会社 | Pattern recognition apparatus and pattern recognition method |
JP5777367B2 (en) * | 2011-03-29 | 2015-09-09 | キヤノン株式会社 | Pattern identification device, pattern identification method and program |
US9042655B2 (en) * | 2011-06-27 | 2015-05-26 | Konica Minolta, Inc. | Image processing apparatus, image processing method, and non-transitory computer readable recording medium |
JP6408055B2 (en) * | 2017-03-22 | 2018-10-17 | 株式会社東芝 | Information processing apparatus, method, and program |
CN108108738B (en) * | 2017-11-28 | 2018-11-16 | 北京达佳互联信息技术有限公司 | Image processing method, device and terminal |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5058184A (en) * | 1985-11-02 | 1991-10-15 | Nippon Hoso Kyokai | Hierachical information processing system |
JPH06176158A (en) | 1992-12-10 | 1994-06-24 | Matsushita Electric Ind Co Ltd | Pattern recognizing device |
US5329594A (en) * | 1991-03-06 | 1994-07-12 | Matsushita Electric Industrial Co., Ltd. | Recognizing and judging apparatus |
US5493688A (en) * | 1991-07-05 | 1996-02-20 | Booz, Allen & Hamilton, Inc. | Pattern categoritzation system having self-organizing analog fields |
US5627943A (en) * | 1993-02-17 | 1997-05-06 | Kawasaki Steel Corporation | Neural network processor including systolic array of two-dimensional layers |
JPH09153021A (en) | 1995-09-26 | 1997-06-10 | Hitachi Ltd | Parallel processor and examination device using this processor |
US5664069A (en) * | 1989-07-10 | 1997-09-02 | Yozan, Inc. | Data processing system |
US5664065A (en) * | 1996-06-17 | 1997-09-02 | The United States Of America As Represented By The Secretary Of The Army | Pulse-coupled automatic object recognition system dedicatory clause |
JP2741793B2 (en) | 1991-10-17 | 1998-04-22 | 川崎製鉄株式会社 | Neural network processor |
JPH1115945A (en) | 1997-06-19 | 1999-01-22 | N T T Data:Kk | Device and method for processing picture and system and method for detecting dangerous substance |
JPH1115495A (en) | 1997-06-23 | 1999-01-22 | Ricoh Co Ltd | Voice synthesizer |
EP0926885A2 (en) | 1997-12-26 | 1999-06-30 | Canon Kabushiki Kaisha | Solid state image pickup apparatus |
US5987170A (en) * | 1992-09-28 | 1999-11-16 | Matsushita Electric Industrial Co., Ltd. | Character recognition machine utilizing language processing |
US6088490A (en) * | 1997-03-28 | 2000-07-11 | President Of Hiroshima University | Apparatus for processing two-dimensional information |
US6185337B1 (en) * | 1996-12-17 | 2001-02-06 | Honda Giken Kogyo Kabushiki Kaisha | System and method for image recognition |
US20020038294A1 (en) | 2000-06-16 | 2002-03-28 | Masakazu Matsugu | Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements |
US20020159627A1 (en) * | 2001-02-28 | 2002-10-31 | Henry Schneiderman | Object finder for photographic images |
US20020181765A1 (en) | 2001-05-31 | 2002-12-05 | Katsuhiko Mori | Pattern recognition apparatus for detecting predetermined pattern contained in input signal |
US6647139B1 (en) * | 1999-02-18 | 2003-11-11 | Matsushita Electric Industrial Co., Ltd. | Method of object recognition, apparatus of the same and recording medium therefor |
US6687386B1 (en) * | 1999-06-15 | 2004-02-03 | Hitachi Denshi Kabushiki Kaisha | Object tracking method and object tracking apparatus |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US181765A (en) * | 1876-09-05 | Improvement in draft-equalizers | ||
US38294A (en) * | 1863-04-28 | Improvement in ship-building |
- 2001-05-31 JP JP2001164510A patent/JP2002358523A/en not_active Withdrawn
- 2002-05-30 US US10/156,942 patent/US7274819B2/en not_active Expired - Lifetime
Patent Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5058184A (en) * | 1985-11-02 | 1991-10-15 | Nippon Hoso Kyokai | Hierachical information processing system |
US5664069A (en) * | 1989-07-10 | 1997-09-02 | Yozan, Inc. | Data processing system |
US5329594A (en) * | 1991-03-06 | 1994-07-12 | Matsushita Electric Industrial Co., Ltd. | Recognizing and judging apparatus |
US5493688A (en) * | 1991-07-05 | 1996-02-20 | Booz, Allen & Hamilton, Inc. | Pattern categoritzation system having self-organizing analog fields |
JP2741793B2 (en) | 1991-10-17 | 1998-04-22 | 川崎製鉄株式会社 | Neural network processor |
US5987170A (en) * | 1992-09-28 | 1999-11-16 | Matsushita Electric Industrial Co., Ltd. | Character recognition machine utilizing language processing |
JPH06176158A (en) | 1992-12-10 | 1994-06-24 | Matsushita Electric Ind Co Ltd | Pattern recognizing device |
US5627943A (en) * | 1993-02-17 | 1997-05-06 | Kawasaki Steel Corporation | Neural network processor including systolic array of two-dimensional layers |
JPH09153021A (en) | 1995-09-26 | 1997-06-10 | Hitachi Ltd | Parallel processor and examination device using this processor |
US5664065A (en) * | 1996-06-17 | 1997-09-02 | The United States Of America As Represented By The Secretary Of The Army | Pulse-coupled automatic object recognition system dedicatory clause |
US6185337B1 (en) * | 1996-12-17 | 2001-02-06 | Honda Giken Kogyo Kabushiki Kaisha | System and method for image recognition |
US6088490A (en) * | 1997-03-28 | 2000-07-11 | President Of Hiroshima University | Apparatus for processing two-dimensional information |
JPH1115945A (en) | 1997-06-19 | 1999-01-22 | N T T Data:Kk | Device and method for processing picture and system and method for detecting dangerous substance |
JPH1115495A (en) | 1997-06-23 | 1999-01-22 | Ricoh Co Ltd | Voice synthesizer |
EP0926885A2 (en) | 1997-12-26 | 1999-06-30 | Canon Kabushiki Kaisha | Solid state image pickup apparatus |
JPH11196332A (en) | 1997-12-26 | 1999-07-21 | Canon Inc | Solid-state image pickup device |
US6647139B1 (en) * | 1999-02-18 | 2003-11-11 | Matsushita Electric Industrial Co., Ltd. | Method of object recognition, apparatus of the same and recording medium therefor |
US6687386B1 (en) * | 1999-06-15 | 2004-02-03 | Hitachi Denshi Kabushiki Kaisha | Object tracking method and object tracking apparatus |
US20020038294A1 (en) | 2000-06-16 | 2002-03-28 | Masakazu Matsugu | Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements |
US20020159627A1 (en) * | 2001-02-28 | 2002-10-31 | Henry Schneiderman | Object finder for photographic images |
US6829384B2 (en) * | 2001-02-28 | 2004-12-07 | Carnegie Mellon University | Object finder for photographic images |
US20020181765A1 (en) | 2001-05-31 | 2002-12-05 | Katsuhiko Mori | Pattern recognition apparatus for detecting predetermined pattern contained in input signal |
Non-Patent Citations (7)
Title |
---|
B. D. Terris, et al., "Near-field Optical Data Storage Using a Solid Immersion Lens," Appl. Phys. Lett., vol. 65, No. 4, 388-390 (Jul. 25, 1994). |
John G. Daugman, "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, No. 7, 1169-1179 (Jul. 1988). |
John G. Daugman, "Uncertainty Relation for Resolution in Space, Spatial Frequency, and Orientation Optimized by Two-dimensional Visual Cortical Filters," J. Opt. Soc. Am. A/vol. 2, No. 7, 1160-1169 (Jul. 1985). |
John Lazzaro, et al., "Silicon Auditory Processors as Computer Peripherals," Advances in Neural Information Processing Systems 5, 820-827 (Morgan Kaufmann 1993). |
S. Y, Kung, "Mapping Neural Nets to Array Architectures," Digital Neural Networks, 340-361 (PTR Prentice Hall 1993). |
Yann LeCun and Yoshua Bengio, "Convolutional Networks for Images, Speech, and Time Series," The Handbook of Brain Theory Neural Networks, 255-258 (Michael A. Arbib ed. MIT Press 1995). |
Yasuhiro Ota and Bogdan M. Wilamowski, "Analog Implementation of Pulse-Coupled Neural Networks," IEEE Transactions on Neural Networks, vol. 10, No. 3, 539-544 (May 1999). |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060228027A1 (en) * | 2001-03-28 | 2006-10-12 | Canon Kabushiki Kaisha | Dynamically reconfigurable signal processing circuit, pattern recognition apparatus, and image processing apparatus |
US7512271B2 (en) * | 2001-03-28 | 2009-03-31 | Canon Kabushiki Kaisha | Dynamically reconfigurable signal processing circuit, pattern recognition apparatus, and image processing apparatus |
US20050185835A1 (en) * | 2004-01-29 | 2005-08-25 | Canon Kabushiki Kaisha | Learning method and device for pattern recognition |
US7697765B2 (en) * | 2004-01-29 | 2010-04-13 | Canon Kabushiki Kaisha | Learning method and device for pattern recognition |
US20080219516A1 (en) * | 2006-08-30 | 2008-09-11 | Canon Kabushiki Kaisha | Image matching apparatus, image matching method, computer program and computer-readable storage medium |
US7995805B2 (en) | 2006-08-30 | 2011-08-09 | Canon Kabushiki Kaisha | Image matching apparatus, image matching method, computer program and computer-readable storage medium |
US8639510B1 (en) | 2007-12-24 | 2014-01-28 | Kai Yu | Acoustic scoring unit implemented on a single FPGA or ASIC |
US8352265B1 (en) | 2007-12-24 | 2013-01-08 | Edward Lin | Hardware implemented backend search engine for a high-rate speech recognition system |
US8463610B1 (en) | 2008-01-18 | 2013-06-11 | Patrick J. Bourke | Hardware-implemented scalable modular engine for low-power speech recognition |
US20090219405A1 (en) * | 2008-02-29 | 2009-09-03 | Canon Kabushiki Kaisha | Information processing apparatus, eye open/closed degree determination method, computer-readable storage medium, and image sensing apparatus |
US8130281B2 (en) | 2008-02-29 | 2012-03-06 | Canon Kabushiki Kaisha | Information processing apparatus, eye open/closed degree determination method, computer-readable storage medium, and image sensing apparatus |
US8624994B2 (en) | 2008-02-29 | 2014-01-07 | Canon Kabushiki Kaisha | Information processing apparatus, eye open/closed degree determination method, computer-readable storage medium, and image sensing apparatus |
US8331655B2 (en) | 2008-06-30 | 2012-12-11 | Canon Kabushiki Kaisha | Learning apparatus for pattern detector, learning method and computer-readable storage medium |
US20090324060A1 (en) * | 2008-06-30 | 2009-12-31 | Canon Kabushiki Kaisha | Learning apparatus for pattern detector, learning method and computer-readable storage medium |
US9852159B2 (en) | 2009-06-18 | 2017-12-26 | Canon Kabushiki Kaisha | Image recognition method and image recognition apparatus |
US10891329B2 (en) | 2009-06-18 | 2021-01-12 | Canon Kabushiki Kaisha | Image recognition method and image recognition apparatus |
US20110158535A1 (en) * | 2009-12-24 | 2011-06-30 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US8675974B2 (en) | 2009-12-24 | 2014-03-18 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US8761459B2 (en) | 2010-08-06 | 2014-06-24 | Canon Kabushiki Kaisha | Estimating gaze direction |
US8755611B2 (en) | 2010-08-18 | 2014-06-17 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method |
US10510000B1 (en) | 2010-10-26 | 2019-12-17 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9053431B1 (en) | 2010-10-26 | 2015-06-09 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US11868883B1 (en) | 2010-10-26 | 2024-01-09 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US11514305B1 (en) | 2010-10-26 | 2022-11-29 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9875440B1 (en) | 2010-10-26 | 2018-01-23 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9361534B2 (en) | 2011-08-05 | 2016-06-07 | Megachips Corporation | Image recognition apparatus using neural network processing |
US9251400B2 (en) | 2011-08-26 | 2016-02-02 | Canon Kabushiki Kaisha | Learning apparatus, method for controlling learning apparatus, detection apparatus, method for controlling detection apparatus and storage medium |
US8792725B2 (en) | 2011-09-14 | 2014-07-29 | Canon Kabushiki Kaisha | Information processing apparatus, control method for information processing apparatus and storage medium |
US9298984B2 (en) | 2011-11-30 | 2016-03-29 | Canon Kabushiki Kaisha | Object detection apparatus, method for controlling the object detection apparatus, and storage medium |
US11023822B2 (en) * | 2015-09-25 | 2021-06-01 | Canon Kabushiki Kaisha | Classifier generation apparatus for generating a classifier identifying whether input data is included in a specific category based on machine learning, classifier generation method, and storage medium |
US20170091671A1 (en) * | 2015-09-25 | 2017-03-30 | Canon Kabushiki Kaisha | Classifier generation apparatus, classifier generation method, and storage medium |
US10860887B2 (en) * | 2015-11-16 | 2020-12-08 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object, and method and apparatus for training recognition model |
US20170140247A1 (en) * | 2015-11-16 | 2017-05-18 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object, and method and apparatus for training recognition model |
US11544497B2 (en) * | 2015-11-16 | 2023-01-03 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object, and method and apparatus for training recognition model |
US10860877B2 (en) * | 2016-08-01 | 2020-12-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Logistics parcel picture processing method, device and system |
US12124954B1 (en) | 2022-11-28 | 2024-10-22 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
Also Published As
Publication number | Publication date |
---|---|
US20020181775A1 (en) | 2002-12-05 |
JP2002358523A (en) | 2002-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7274819B2 (en) | Pattern recognition apparatus using parallel operation | |
US7028271B2 (en) | Hierarchical processing apparatus | |
EP1164537B1 (en) | Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements | |
US7743004B2 (en) | Pulse signal circuit, parallel processing circuit, and pattern recognition system | |
US7088860B2 (en) | Dynamically reconfigurable signal processing circuit, pattern recognition apparatus, and image processing apparatus | |
US7707128B2 (en) | Parallel pulse signal processing apparatus with pulse signal pulse counting gate, pattern recognition apparatus, and image input apparatus | |
Hepner et al. | Artificial neural network classification using a minimal training set- Comparison to conventional supervised classification | |
JP2005352900A (en) | Device and method for information processing, and device and method for pattern recognition | |
JP4478296B2 (en) | Pattern detection apparatus and method, image input apparatus and method, and neural network circuit | |
CN116342894B (en) | GIS infrared feature recognition system and method based on improved YOLOv5 | |
US7007002B2 (en) | Signal processing circuit involving local synchronous behavior | |
JP4510237B2 (en) | Pattern detection apparatus and method, image processing apparatus and method | |
CN112001240B (en) | Living body detection method, living body detection device, computer equipment and storage medium | |
JP4314017B2 (en) | Hierarchical processing device | |
JP4532678B2 (en) | Pattern detection apparatus and method, image processing apparatus and method, and neural network apparatus | |
JP4898018B2 (en) | Signal processing circuit and pattern recognition device | |
JP2002358501A (en) | Signal processing circuit | |
WO2002005207A2 (en) | Classifier for image processing | |
Kramer et al. | Neural network clutter-rejection model for FLIR ATR | |
Zhang et al. | Image fusion based on the self-organizing feature map neural networks | |
KR19990081569A (en) | Feature Vector Value Extraction Method for Character Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: CANON KABUSHIKI KAISHA, JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: MATSUGU, MASAKAZU; REEL/FRAME: 013108/0717; Effective date: 20020709
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| CC | Certificate of correction |
| FPAY | Fee payment | Year of fee payment: 4
| FPAY | Fee payment | Year of fee payment: 8
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 12