WO2004055735A1 - Pattern identification method, apparatus, and program - Google Patents
Pattern identification method, apparatus, and program
- Publication number
- WO2004055735A1 (PCT/JP2003/016095)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- detection
- unit
- model
- pattern
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present invention relates to a method, an apparatus and a program for identifying a pattern of an input signal.
- It is known to detect an object to be recognized from an image containing both the recognition target and a background by executing a recognition processing algorithm specialized for that specific target, either in computer software or in hardware such as a dedicated parallel image processing processor.
- Japanese Unexamined Patent Application Publication No. Hei 9-251534 discloses a method in which a face area is first searched for in an input image using a template called a standard face, and a person is then authenticated using partial templates for feature point candidates such as the eyes, nostrils, and mouth.
- Japanese Patent No. 29733667 discloses a technique in which, when checking the degree of coincidence between the shape data of each face part and the input image, the region to be checked is determined from a previously established positional relationship among the parts.
- Another Japanese Patent Application Laid-Open publication discloses an approach in which an area model, in which determination element acquisition areas are set, is moved over the input image, and at each point the presence or absence of the determination elements within those areas is checked to recognize a face.
- Japanese Patent Application Laid-Open No. H11-15973 and "Rotation Invariant Neural Network-Based Face Detection" disclose methods in which, to cope with rotation of the subject, a coordinate transformation is applied around the subject's center coordinates so that the rotation is converted into a shift, which is then detected.
- In the latter, a neural network (hereafter "NN") that detects the rotation angle of the face is prepared, the input image is rotated according to the angle output by that NN, and the rotated image is then input to an NN that performs face detection.
- However, the technology described in Japanese Patent No. 2767814 matches the face candidate group in the input image against a previously stored face structure, and the number of faces in the target input image is limited to one or a small number.
- As for face size, it is assumed that the face occupies a somewhat large portion of the input image, with most of the image area being face and only a small background. Under this assumption, face candidates can be created from all eye and mouth candidate groups while the number of candidates stays limited. In images taken with an ordinary camera or video, however, the face may be small and the background large, so many eye and mouth candidates are falsely detected in the background. If face candidates are then created from all eye and mouth candidate groups by the method of Japanese Patent No. 2767814, the number of face candidates becomes enormous and the processing cost of matching them against the face structure increases.
- The technique described in Japanese Patent No. 29733667 holds shape data for the irises (eyes), mouth, nose, and so on; it first finds the two irises and then, when searching for the mouth, nose, and other parts, limits their search areas based on the iris positions. In other words, this algorithm does not detect the face parts that make up the face in parallel; it first finds the irises and uses that result to detect the mouth, nose, and other parts in order. The method assumes that only one face exists in the image and that the irises are found accurately, so if the detected irises are wrong, the search areas for the other features such as the mouth and nose cannot be set correctly.
- Also, the accuracy of the face detection NN in the latter stage depends on the accuracy of the rotation-angle NN in the former stage: an incorrect angle output makes face detection difficult. If multiple subjects with different rotation angles are present in the image, the input image must be rotated at multiple angles and each converted image passed through the face detection NN over the entire image, so the processing cost increases greatly compared with detection without rotation.
- Japanese Patent Publication No. 7-111819 discloses a pattern recognition method in which a dictionary pattern arranges the characteristic vectors of the patterns of each class in descending order of the variance of the vector components; a feature vector generated from the input is first matched against the top N dimensions of the dictionary patterns, matching over the lower dimensions is performed based on that result, and the processing cost is thereby reduced.
- Japanese Patent Application Laid-Open Publication No. Hei 10-115543 proposes a pattern recognition dictionary creation device and a pattern recognition device in which feature vectors are extracted from input data and classified into clusters according to their degree of coincidence with the standard vector of each cluster; category classification is then performed based on the degree of coincidence between the category standard vectors and the feature vectors within the cluster, reducing the processing cost of matching.
Disclosure of the invention
- According to the present invention, a pattern identification method for hierarchically extracting features of input data and identifying the pattern of the input data includes a first feature extraction step of extracting features of a first layer, a determination step of determining, based on the feature extraction result of the first feature extraction step, a method of extracting features of a second layer higher than the first layer, and a second feature extraction step of extracting the features of the second layer based on the determined method.
- Likewise, a pattern identification device for hierarchically extracting features of input data and identifying the pattern of the input data includes first feature extraction means for extracting features of the first layer, determination means for determining, based on the feature extraction result of the first feature extraction means, a method of extracting features of a second layer higher than the first layer, and second feature extraction means for extracting the features of the second layer based on the method determined by the determination means.
- Further, a pattern identification program causes a computer to hierarchically extract features of input data and identify the pattern of the input data through a first feature extraction step, a determination step of determining, based on the feature extraction result of the first feature extraction step, a method of extracting features of a second layer higher than the first layer, and a second feature extraction step of extracting the features of the second layer based on the determined method.
- FIGS. 1A and 1B are diagrams showing basic configurations of the pattern identification device according to the first embodiment.
- FIG. 2 is a diagram illustrating a functional configuration of the pattern identification device according to the first embodiment.
- FIG. 3 is a flowchart showing a processing flow in the first embodiment.
- FIG. 4 is a diagram showing a face presence image as an identification category in the first embodiment.
- FIG. 5 is a diagram showing four types of initial feature extraction results.
- FIG. 6 is a diagram showing each initial feature extraction result at a position where each local feature to be extracted exists.
- FIG. 7 is a diagram showing a configuration of a basic Convolutional Neural Network.
- FIG. 8 is a diagram illustrating a functional configuration of a pattern identification device according to the second embodiment.
- FIGS. 9A and 9B are flowcharts showing the flow of processing in the second embodiment.
- FIG. 10 is a diagram illustrating a functional configuration of a pattern identification device according to the third embodiment.
- FIGS. 11A and 11B are flowcharts showing the flow of processing in the third embodiment.
- FIG. 12 is a diagram showing a block configuration of a computer for realizing the present invention.
- FIG. 13 is a diagram showing a configuration of a pattern detection device according to the fourth embodiment.
- FIG. 14 is a diagram illustrating an example of the features detected by each feature detection unit in the pattern detection device according to the fourth embodiment.
- FIG. 15 is a flowchart for explaining an operation example of the pattern detection device according to the fourth embodiment.
- FIGS. 16A and 16B are diagrams for explaining a model relating to the right-open V-shaped feature 2-1-1 among the secondary features.
- FIGS. 17A to 17D are diagrams illustrating an example of a rotated detection model for detecting a secondary feature.
- FIGS. 18A and 18B are schematic diagrams illustrating a model selection method in the tertiary feature detection model selection unit 1313.
- FIGS. 19A and 19B are diagrams illustrating an example of an eye detection model for detecting an eye feature in the tertiary feature detection unit 1303.
- FIG. 20 is a block diagram showing a configuration of an imaging device using the pattern detection device.
- FIG. 21 is a block diagram illustrating a configuration of the pattern detection device according to the fifth embodiment of the present invention.
- FIG. 22 is a flowchart for explaining the operation of the tertiary feature detection model selection unit according to the fifth embodiment.
- FIG. 23 is a schematic diagram for explaining a method of selecting a detection model in the fifth embodiment.
- FIG. 24 is a diagram illustrating a change in the rotation angle of the detection model in each layer in the fifth embodiment.
- FIG. 25 is a block diagram showing the configuration of the pattern detection device according to the sixth embodiment.
- FIG. 26 is a diagram showing an outline of the two rotation angles in the sixth embodiment.
- FIG. 27 is a block diagram illustrating a configuration of the pattern detection device according to the seventh embodiment.
- FIG. 28 is a flowchart for explaining the operation of the pattern detection device.
- FIGS. 29A to 29D are diagrams for explaining an example of a target image for face area detection.
- FIG. 30 is a diagram for explaining an example of “parameters” used for face area detection.
- FIGS. 31A and 31B are diagrams for explaining the difference in the eye feature detection model depending on the position in the target image in limited-area detection.
- FIGS. 32A and 32B are diagrams for describing the setting of a confirmation pattern for face area detection.
- FIGS. 33A and 33B are diagrams for describing detection of a character string by the function of the pattern detection device.
- FIG. 34 is a block diagram illustrating a configuration of the information processing device according to the eighth embodiment.
- FIG. 35 is a diagram for explaining the Convolutional Neural Network structure.
- FIG. 36 is a flowchart for explaining the operation of the information processing apparatus.
- FIG. 37 is a diagram for schematically explaining feature detection weight data in the information processing device.
- FIG. 38 is a block diagram illustrating the configuration of the information processing device according to the ninth embodiment.
- FIG. 39 is a diagram for schematically explaining the size changing function.
- As the identification categories, a face-present image, in which the vicinity of the center of a face lies almost at the center of the input image as shown in i to iv of FIG. 4, and a face-absent image otherwise, are assumed; a method for identifying which of these two categories the input image data belongs to will be described.
- Although the described case identifies whether an image includes a face, the present invention is not limited to this; it is also applicable to other image patterns and to cases where the input data is audio data.
- In the present embodiment, for simplicity, only the single category of face is identified as to whether or not the input falls within it; however, the method can also be applied when identifying a plurality of categories.
- FIG. 1A shows the basic configuration of the pattern identification device.
- an outline of the pattern identification device will be described with reference to FIG. 1A.
- the data input unit 11 in FIG. 1A inputs input data for performing pattern identification.
- The hierarchical feature extraction processing unit 12 is a processing unit that hierarchically extracts features from the input data and identifies the pattern of the input data; it has a primary feature extraction processing unit 121 that performs primary feature extraction and a secondary feature extraction processing unit 122 that performs secondary feature extraction.
- The extraction result distribution analysis unit 13 analyzes the distribution of the feature extraction results extracted by the primary feature extraction processing unit 121.
- a data input unit 11 inputs data for performing an identification process.
- the input data is subjected to hierarchical feature extraction processing in the hierarchical feature extraction processing unit 12.
- a primary feature extraction processing unit 121 hierarchically extracts a plurality of primary features from input data.
- The distribution of at least one type of primary feature extracted by the primary feature extraction processing unit 121 is analyzed by the extraction result distribution analysis unit 13, and based on the analysis result, secondary feature extraction is performed by the secondary feature extraction processing unit 122.
- FIG. 1B shows another basic configuration of the pattern identification device.
- An outline of this pattern identification device will be described with reference to FIG. 1B.
- a data input unit 11 inputs input data for performing pattern identification.
- The hierarchical feature extraction processing unit 12 is a processing unit that performs feature extraction hierarchically on the input data and identifies the pattern of the input data; it includes a primary feature extraction processing unit 121 that performs primary feature extraction and a secondary feature extraction processing unit 122 that performs secondary feature extraction.
- the extraction result distribution analysis unit 13 analyzes the distribution of the feature extraction results extracted in the primary feature extraction processing unit 121.
- The category-based likelihood calculation unit 14 is a processing unit that calculates the likelihood of each category of secondary feature from the analysis result produced by the extraction result distribution analysis unit 13.
- A data input unit 11 inputs data to be subjected to identification processing.
- the input data is subjected to hierarchical feature extraction processing in the hierarchical feature extraction processing unit 12.
- a primary feature extraction processing unit 121 hierarchically extracts a plurality of primary features from input data.
- the extraction result distribution of at least one type of primary feature extracted in the primary feature extraction processing unit 121 is analyzed in the extraction result distribution analysis unit 13.
- Based on the results analyzed by the extraction result distribution analysis unit 13, the category-based likelihood calculation unit 14 calculates the likelihood of each category of secondary feature to be extracted by the secondary feature extraction processing unit 122; the secondary feature extraction processing unit 122 then performs feature extraction only for categories whose calculated likelihood is greater than or equal to a predetermined value.
- FIG. 2 shows the functional configuration of the pattern identification device according to the present embodiment.
- FIG. 3 shows a processing flow in the present embodiment.
- the solid arrows in FIG. 2 indicate the flow of actual signal data, and the broken arrows indicate not the actual signal data but the flow of command signals such as operation instructions. The same expression is used in FIGS. 8 and 10 described later.
- In step S301, image data to be identified is input from the image input unit 21.
- a grayscale image is used as input image data, but an RGB color image or the like may be used.
- the initial feature extracting unit 22 extracts at least one initial feature such as an edge in a specific direction in the input image.
- The local feature extraction unit 23 uses the initial features extracted by the initial feature extraction unit 22 to extract local features such as edge line segments having a specific length and the end points of such segments.
- the partial feature extraction unit 24 extracts the partial features such as the eyes and the mouth using the local features extracted by the local feature extraction unit 23.
- In step S305, the distribution in the image of the partial features extracted by the partial feature extraction unit 24 is analyzed by the partial feature distribution determination unit 25.
- In step S306, the partial feature distribution determination unit 25 issues a start command to the face extraction unit 26 according to the analysis result, turning on the flags of the face extraction modules to be started.
- the face extraction unit 26 is a processing unit that extracts a face using the partial features extracted by the partial feature extraction unit 24.
- The face extraction unit 26 is composed of a plurality of modules, each performing face extraction corresponding to a specific size and orientation; only those modules that have received a start command perform face extraction.
- the face extraction processing is sequentially performed by the face extraction module whose flag is on, and the flag of the face extraction module that has performed the face extraction is turned off. When there are no more face extraction modules with the flag turned on, the face extraction processing ends.
- The detection result output unit 27 integrates the face extraction results obtained by the face extraction modules to determine whether the input image is a face-present image or a face-absent image, and outputs the result.
- It is desirable that the initial features extracted from the input image be features that are constituent elements of the features extracted by the local feature extraction unit 23, which is one hierarchy higher.
- In the present embodiment, filtering is performed at each position of the input image using differential filters in the vertical, horizontal, diagonally-right-rising, and diagonally-left-rising directions, extracting four kinds of features: vertical, horizontal, right-diagonal, and left-diagonal edges.
- Although filtering is used here, the features may instead be extracted by template matching at each position of the input image, using template images representing the initial features prepared in advance.
- the features extracted here are stored as information such as the type of the feature, the position in the image, the likelihood of the feature to be extracted, and the feature detection level.
- Features as shown in FIGS. 5A to 5D are extracted from the input image: (a) shows the vertical edge extraction result, (b) the horizontal edge, (c) the right diagonal edge, and (d) the left diagonal edge extraction result.
- Positions where the filter response is 0 are shown in gray, positive values are shown as high luminance, and negative values as low luminance. In other words, positions shown with high luminance in FIG. 5 are positions where an edge in the direction corresponding to each filter type was extracted; conversely, positions shown with low luminance are positions where an edge in the opposite direction exists.
- The gray parts, at intermediate luminance, indicate positions where no edge was extracted.
- Since differential filters are used for feature extraction in this embodiment, the absolute value of the filter response indicates the sharpness of the edge; that is, positions in the input image where the luminance change is large in the direction matching the filter type appear as high or low luminance values.
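- As a concrete illustration of this initial feature extraction stage (not taken from the patent itself), the following sketch applies four directional derivative kernels to a grayscale image; the exact 3x3 coefficients and function names are assumptions, since the text only specifies differential filters in the four directions.

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative 3x3 derivative kernels for the four directions
# (vertical, horizontal, right-diagonal, left-diagonal edges).
KERNELS = {
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "diag_right": np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "diag_left":  np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),
}

def extract_initial_features(gray):
    """Return one signed edge-response map per direction.

    Positive responses mark edges in the filter's direction and negative
    responses mark edges in the opposite direction, matching the
    gray / high-luminance / low-luminance rendering described for FIG. 5.
    """
    return {name: convolve(gray, k, mode="nearest")
            for name, k in KERNELS.items()}
```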
- As with the initial features, it is desirable that the local features extracted using the initial feature extraction results of the initial feature extraction unit 22 be constituent elements of the features extracted by the partial feature extraction unit 24, which is one hierarchy higher.
- In this embodiment, since the eyes and mouth are extracted by the partial feature extraction unit 24, the local feature extraction unit 23 extracts two types of edge-line end points, such as the outer corner of the eye, the inner corner of the eye, and the two ends of the mouth (the portions surrounded by circles in (1-a) to (1-d) of FIG. 6), and two types of edge segments of a specific length, corresponding to the upper and lower parts of the eyes and lips.
- (1-a) to (1-d) in FIG. 6 are the initial feature extraction results at a position where a left end point (the inner corner of the left eye in the figure) exists: (1-a) is the vertical edge extraction result, (1-b) the horizontal edge, (1-c) the right diagonal edge, and (1-d) the left diagonal edge extraction result.
- Similarly, (2-a) to (2-d) are the extraction results of each initial feature (vertical, horizontal, right diagonal, and left diagonal edges) at a position where a right end point (an end point of the mouth in the figure) exists; (3-a) to (3-d) are those at the upper part of an eye or the upper lip (the upper part of the right eye in the figure); and (4-a) to (4-d) are those at the lower part of an eye or the lower lip (the lower part of the lip in the figure).
- In this embodiment, a unique two-dimensional mask is prepared in advance for each of the features extracted by the initial feature extraction unit 22, and each local feature is extracted by a filtering operation using these masks, as shown in FIG. 6.
- Each two-dimensional mask prepared in advance corresponds to the distribution of initial feature extraction results at positions where the feature to be extracted exists; for a feature such as the left end point, for example, it corresponds to the distributions (1-a) to (1-d).
- That is, the mask is set so that the filter response becomes high when the distribution of initial feature extraction results matches the distribution peculiar to positions where the feature to be extracted exists.
- As a method of setting a two-dimensional mask, a plurality of test patterns may simply be given: if a given test pattern is the feature to be extracted, the value of each element of the mask is adjusted so that the filter response becomes high, and if it is not, the elements are adjusted so that the response becomes low. As another method, the value of each element of the two-dimensional mask may be set using knowledge possessed in advance.
- The features extracted by this processing are stored, as in the initial feature extraction unit 22, as information such as the type of the extracted feature, its position in the image, and the likelihood and detection level of the feature.
- Specifically, at the position of each extracted feature, filtering is performed on each initial feature extraction result using the two-dimensional mask unique to the feature, and the results are integrated and recorded as the likelihood of that feature.
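- A minimal sketch of how one local feature's likelihood map could be computed from the four initial feature maps, assuming the masks have already been set by the test-pattern adjustment or prior knowledge described above; mask values and names here are placeholders.

```python
import numpy as np
from scipy.ndimage import convolve

def local_feature_likelihood(initial_maps, masks):
    """Likelihood map of one local feature (e.g. a 'left end point').

    initial_maps: dict direction -> 2-D response map from the layer below.
    masks:        dict direction -> 2-D mask specific to this feature
                  (placeholder values; the patent sets them from test
                  patterns or prior knowledge).
    Each initial feature map is filtered with its mask and the responses
    are integrated by summation, giving the feature's likelihood at
    every position.
    """
    likelihood = None
    for direction, fmap in initial_maps.items():
        response = convolve(fmap, masks[direction], mode="nearest")
        likelihood = response if likelihood is None else likelihood + response
    return likelihood
```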
- The processing in the partial feature extraction unit 24 is the same as that in the local feature extraction unit 23: partial features are extracted from the extraction results of the plural local features produced by the local feature extraction unit 23, the layer below.
- The partial features to be extracted are desirably constituent elements of the features extracted by the face extraction unit 26, the layer above; in this embodiment, that is, constituent elements of the face.
- the partial feature extraction unit 24 extracts an eye, a mouth, and the like.
- The extraction process is the same as the extraction method in the local feature extraction unit 23, and the features may be extracted by filtering using specific two-dimensional masks.
- Alternatively, eyes and mouths may be extracted based on whether features having a likelihood of at least a certain value stand in a specific spatial arrangement.
- the eyes and mouth extracted as described above are also stored as information such as the type of the extracted feature, the position in the image, the likelihood and the feature amount of the feature to be extracted.
- In this embodiment, the results of filtering the local feature extraction results with the two-dimensional masks unique to the eyes and mouth are integrated at each position in the image and held as the likelihood.
- The partial feature distribution determination unit 25 performs a simple distribution analysis on the feature extraction results produced by the partial feature extraction unit 24 and, based on the result, gives start instructions to one or more face extraction modules of the face extraction unit 26.
- Unlike the processing performed by the initial feature extraction unit 22 through the partial feature extraction unit 24, the analysis performed here checks the necessary conditions of each predetermined face extraction module to which an activation instruction may be given. For example, in the present embodiment, an analysis determines whether eyes were extracted by the partial feature extraction unit 24 near predetermined coordinates of the input image, whether the center of gravity of the mouth extraction result lies near predetermined coordinates, or whether the total eye likelihood is equal to or greater than a predetermined value.
- Here, fluctuation refers to a change in a feature caused by, for example, an affine transformation such as rotation or scaling, or by the transformation corresponding to a face turning sideways.
- Specifically, for example, the condition that the center of gravity of the mouth extraction result lies to the lower left of the image center while the center of gravity of the eye extraction result lies to the upper right is set as one of the necessary conditions of the face extraction module corresponding to clockwise in-plane rotation.
- Several such analyses are performed, and a start command is issued to each predetermined face extraction module whose conditions are satisfied.
- The analysis of the center of gravity, the total likelihood, and the like may be performed within a predetermined range, for example at a position where an eye is predicted to be present, and two or more features may be compared with respect to their cumulative likelihoods.
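- The kind of lightweight analysis described here could look like the following sketch; the condition values, module names, and threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def centroid(likelihood_map, threshold=0.0):
    """Center of gravity of positions whose likelihood exceeds a threshold."""
    ys, xs = np.nonzero(likelihood_map > threshold)
    if len(xs) == 0:
        return None
    w = likelihood_map[ys, xs]
    return np.average(xs, weights=w), np.average(ys, weights=w)

def select_face_modules(eye_map, mouth_map, center):
    """Turn on the flags of face extraction modules whose necessary
    conditions hold (image y axis grows downward)."""
    cx, cy = center
    flags = {}
    eye_c, mouth_c = centroid(eye_map), centroid(mouth_map)
    if eye_c is not None and mouth_c is not None:
        # Condition from the text: mouth centroid lower-left of the image
        # center and eye centroid upper-right -> clockwise in-plane
        # rotation module.
        flags["rotation_cw"] = (mouth_c[0] < cx and mouth_c[1] > cy and
                                eye_c[0] > cx and eye_c[1] < cy)
    # Example condition: total eye likelihood at or above a set value.
    flags["frontal"] = float(eye_map.sum()) >= 10.0  # threshold assumed
    return flags
```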
- Each predetermined face extraction module that received an activation instruction from the partial feature distribution determination unit 25 performs, using the eye and mouth extraction results produced by the partial feature extraction unit 24, a feature extraction process similar to that of the partial feature extraction unit 24.
- Modules that respond to specific fluctuations include, for example, modules for size variation (FIG. 4), in-plane rotation (iii in FIG. 4), lateral turning of the face (iv in FIG. 4), vertical turning, and so on.
- a specific two-dimensional mask is prepared in advance for each module corresponding to the above-described variation, and only the module that has received the activation instruction performs the filtering process using the specific two-dimensional mask. .
- The two-dimensional masks are set in the same way as described for the local feature extraction unit 23, by giving as test patterns faces having the specific variation corresponding to each module, so that each mask specializes in that variation.
- The detection result output unit 27 performs the final classification of the input image from the filtering results of the modules that received activation commands and performed face extraction.
- The determination is not limited to this method; for example, the final determination may be made by integrating the output values of the activated modules. Specifically, the output value of the module corresponding to clockwise in-plane rotation may be combined, with a predetermined weight, with the output value of the module corresponding to counterclockwise in-plane rotation, the opposite fluctuation category, so that the outputs of modules whose fluctuations are contradictory suppress each other and erroneous discrimination is reduced.
- Likewise, the output value of the module corresponding to a face of a specific size may be weighted and added to the output value of the module corresponding to a slightly larger size, a similar fluctuation category.
- This allows the discrimination threshold to be set high, and as a result erroneous discrimination can be reduced.
- Values obtained by weighting and adding, or simply arithmetically averaging, the output values of two or more modules of similar categories in this way correspond to fluctuations intermediate between the categories.
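- As a sketch of this integration step, module outputs can be combined with signed weights so that contradictory variations suppress each other and similar variations reinforce each other; the names and weight values below are assumptions for illustration.

```python
def integrate_module_outputs(outputs, weights):
    """Weighted integration of face extraction module outputs.

    outputs: dict module name -> scalar output value.
    weights: dict (target, source) -> signed weight. Negative weights
             suppress contradictory variations (e.g. clockwise vs.
             counterclockwise rotation); positive weights reinforce
             similar variations (e.g. neighbouring sizes).
    """
    combined = {}
    for target, value in outputs.items():
        for source, v in outputs.items():
            if source != target:
                value += weights.get((target, source), 0.0) * v
        combined[target] = value
    return combined

# Example: suppress the opposite rotation, reinforce the neighbouring size.
weights = {("rot_cw", "rot_ccw"): -0.5, ("size_m", "size_l"): 0.3}
```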
- The first embodiment used two-dimensional image data as input data and identified whether or not the image data belongs to a specific category: assuming a face-present image with the face at the center of the image and a face-absent image otherwise, it described a method of identifying which of the two categories the input image data corresponds to.
- In the present embodiment, a method of detecting the position of a target in an image, using two-dimensional image data as input data, will be described.
- processing for detecting a face in an image is performed.
- the present invention is not limited to this.
- It is also applicable to other image patterns, to audio input data, and to cases where objects of multiple categories are detected.
- In this embodiment, a modified form of the basic Convolutional Neural Network (hereinafter, CNN) configuration is used.
- FIG. 7 shows the configuration of a basic CNN and is used here to explain its basic processing. In FIG. 7, the input is at the left end and processing proceeds to the right.
- Reference numeral 71 in FIG. 7 denotes a pixel value distribution corresponding to the luminance value and the like of the input image.
- Reference numerals 72, 74, 76, and 78 denote feature detection layers; the planes L7-21, L7-22, L7-23, L7-24, L7-41, L7-42, L7-43, L7-44, L7-61, L7-62, and L7-81 in these layers are feature detection cell planes.
- Reference numerals 73, 75, and 77 denote feature integration layers; the planes L7-31, L7-32, L7-33, L7-34, L7-51, L7-52, L7-53, L7-54, L7-71, and L7-72 in these layers are feature integration cell planes.
- Each feature detection cell plane in a feature detection layer has feature detection neurons for detecting a specific feature.
- Each feature detection neuron is connected, in a local range corresponding to its position, to the feature detection results of the preceding layer (for a neuron in feature detection layer 74, the extraction results of L7-31 through L7-34; for a neuron in feature detection layer 72, the input image 71), with a weight distribution unique to each feature detection cell plane.
- These weights correspond to the differential filters for extracting edges and the two-dimensional masks for extracting specific features described in the first embodiment.
- Each feature detection neuron performs weighted addition, with its predetermined weights, on the feature extraction results of the connected cell planes (or on the luminance values of the input image, in the case of feature detection layer 72), converts the resulting value with a nonlinear function such as the hyperbolic tangent, and uses the converted value as its output; the feature is detected in this way.
- For example, each feature detection neuron in L7-21 applies a weighted addition corresponding to a differential filter to the luminance values of the input image; at positions where a vertical edge exists in the input image, the result of the operation performed by the neurons in L7-21 becomes large, giving a high output value, that is, the feature is detected.
- The other feature detection cell planes are likewise configured so that their feature detection neurons give high output values at positions where the specific feature of each plane is detected.
- Such nonlinear conversion of the operation result is common, but the present invention is not limited to it.
- Each feature integration cell plane in a feature integration layer is connected to one feature detection cell plane of the preceding feature detection layer and has feature integration neurons that couple to the preceding detection results in a local range and blur (integrate) the detection results.
- Each feature integration neuron performs basically the same operation as the feature detection neuron described above, except that its weight distribution, which corresponds to a specific two-dimensional mask, is a Gaussian filter or a low-pass filter.
- The CNN network gradually detects higher-order features from the initial features and finally categorizes the input; by detecting higher-order features from the input image through this processing, a specific image can be detected.
- CNN is characterized by its ability to perform robust discrimination against various pattern variations by hierarchical feature extraction and blurring of the feature integration layer.
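- A minimal sketch of one detection plane and one integration plane as described above, assuming scipy is available; the weight masks would come from learning, and tanh and the Gaussian are the nonlinearity and blurring weight named in the text.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def feature_detection_plane(prev_planes, weight_masks):
    """One feature detection cell plane: each neuron takes a weighted sum
    over its local receptive field in the connected previous-stage planes,
    then applies a nonlinear function (hyperbolic tangent, as in the text).
    """
    total = sum(convolve(p, w, mode="nearest")
                for p, w in zip(prev_planes, weight_masks))
    return np.tanh(total)

def feature_integration_plane(detection_plane, sigma=1.0):
    """One feature integration cell plane: blur (integrate) the result of
    exactly one detection plane; the weight distribution is a Gaussian
    (a low-pass filter would also fit the description)."""
    return gaussian_filter(detection_plane, sigma)
```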
- FIG. 8 shows the configuration of the processing units in the present embodiment.
- FIGS. 9A and 9B show the flow of processing in the present embodiment.
- the process in the present embodiment will be described with reference to FIGS. 8 and 9A and 9B.
- The image input unit 801, initial feature extraction unit 802, local feature extraction unit 803, and partial feature extraction unit 804 in FIG. 8 are the same as the image input unit 21, initial feature extraction unit 22, local feature extraction unit 23, and partial feature extraction unit 24 of the first embodiment, respectively.
- The processing in steps S901 to S904 is the same as the processing in steps S301 to S304 in FIG. 3.
- However, an RGB color image is used in the image input unit 801, and the RGB color image converted to a grayscale image is used as the input of the initial feature extraction unit 802 in the next layer.
- the above-described processing by CNN is used for feature extraction, and each feature extraction unit performs feature detection by the feature detection layer and integration of the features detected by the feature integration layer.
- the types of features extracted by the local feature extraction unit 803 and the partial feature extraction unit 804 are the same as those in the first embodiment.
- The weight distribution unique to each feature detection cell plane for detecting each feature is set by learning, giving a plurality of test patterns as input, similarly to the method of setting the unique two-dimensional masks described in the first embodiment.
- In this embodiment, the features extracted by the initial feature extraction unit 802 are not defined in advance; instead, the error backpropagation method is used when learning the features detected by the local feature extraction unit 803.
- That is, while the weight distributions unique to the feature detection cell planes for detecting local features are learned, the weight distributions unique to the cell planes for detecting initial features, namely the connection weights with the input image 71, are set automatically, so that the initial feature extraction unit 802 extracts the features that constitute the local features and are necessary to detect them.
- In step S905, the first face extraction unit 805 performs the same processing as the feature extraction described above on the eye and mouth extraction results produced by the partial feature extraction unit 804, and extracts faces in the image.
- The face candidate presence determination unit 806 then determines where face candidates exist (step S906).
- The number of face candidates is set to Count (step S907), the coordinates of each position determined to have a face candidate are output sequentially, and a start command is issued to the skin color region extraction unit 807 and the partial feature distribution determination unit 808 (step S908).
- The skin color region extraction unit 807 receives the activation command from the face candidate presence determination unit 806 and extracts a skin color region from the input image in a range based on the face candidate position coordinates (step S909).
- The partial feature distribution determination unit 808 determines the distribution of the partial feature extraction results in a range based on the face candidate position coordinates (step S910) and, as in the first embodiment, turns on the flags of the face extraction modules to be activated (step S911).
- The partial feature distribution determination unit 808 of the present embodiment differs from the partial feature distribution determination unit 25 of the first embodiment in that it analyzes the simple distribution not only of the feature extraction results of the partial feature extraction unit 804 but also of the skin color region extraction results of the skin color region extraction unit 807, and it issues start commands to the second face extraction unit 809, which is composed of face extraction modules corresponding to multiple variations.
- In the present embodiment, one face extraction module corresponds to one feature detection cell plane of the CNN.
- The second face extraction unit 809 performs face extraction with the modules corresponding to the variations, as in the first embodiment: the face extraction modules whose flags are on sequentially perform face extraction at the face candidate position coordinates, and the flag of each module that has performed face extraction is turned off (steps S911 to S914).
- The face extraction processing in the present embodiment differs from the first embodiment in that not only the eye and mouth extraction results produced by the partial feature extraction unit 804, but also the feature extraction results corresponding to the upper parts of the eyes and the upper lip extracted by the local feature extraction unit 803 and the skin color region extraction result from the skin color region extraction unit 807, are used to extract a face corresponding to each specific variation.
- The detection result output unit 810 outputs a result indicating where faces are in the input image based on the face extraction results of the second face extraction unit 809: the output results of the modules are integrated (step S915), the detection result at the face candidate position is output (step S916), and the process loops to detection at the next face candidate position (steps S917 to S918).
- the face extraction processing performed by the first face extraction unit 805 is the same as the feature extraction processing performed by the local feature extraction unit 803 and the partial feature extraction unit 804.
- The face extraction here differs from the face extraction unit 26 of the first embodiment in that it does not have a plurality of face extraction modules corresponding to fluctuations but consists of only one module. Also, unlike the first embodiment, in order to detect where faces are in the image, face extraction is performed not only near the center of the image but over the entire image.
- The weight distribution of each face detection neuron used in this extraction, coupling it with the partial feature extraction results of the partial feature extraction unit 804, is set by learning with faces having various fluctuations, that is, faces as in i to iv of FIG. 4, given as test data. Learning in this way yields low accuracy, for example a high possibility of judging non-faces as faces, but makes it possible to extract faces with various fluctuations with a single module.
- feature detection is performed using the weight distribution learned as described above, and the results are integrated by the feature integration layer.
- In this embodiment, the face candidate presence determination unit 806 determines the portions whose output is equal to or greater than a predetermined threshold in the face extraction result of the first face extraction unit 805; assuming that face candidates exist at the determined positions, it issues start commands so that the skin color region extraction unit 807 and the partial feature distribution determination unit 808 process the ranges where the candidates exist.
- the skin color region extraction unit 807 receives the activation command from the face candidate presence determination unit 806 and extracts a skin color region near the range where the face candidate exists.
- In this embodiment, within the region concerned, the RGB color input image is converted to the HSV color system, and only pixels within a specific hue (H) range are extracted as the skin color region.
- the method for extracting the skin color region is not limited to this, and any other generally known method may be used. For example, the extraction may be performed using the saturation (S) and the luminance (V). Further, in the present embodiment, the skin color area is extracted, but other than that, a hair area or the like may be extracted.
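- A minimal sketch of the hue-based extraction, assuming RGB values in [0, 1]; the hue bounds are placeholders, since the text only specifies a "specific hue (H) range".

```python
import numpy as np
import colorsys

def skin_color_mask(rgb, h_low=0.0, h_high=0.1):
    """Boolean skin-color mask from an RGB image via the HSV color system.

    Keeps only pixels whose hue falls inside [h_low, h_high]; the bounds
    here are assumptions for illustration.
    """
    height, width, _ = rgb.shape
    hue = np.empty((height, width))
    for y in range(height):
        for x in range(width):
            hue[y, x] = colorsys.rgb_to_hsv(*rgb[y, x])[0]
    return (hue >= h_low) & (hue <= h_high)
```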
- the partial feature distribution determining unit 808 performs the same processing as the partial feature distribution determining unit 25 in the first embodiment.
- That is, it receives the activation instruction from the face candidate presence determination unit 806 and analyzes the distribution of the predetermined feature extraction results in the vicinity of the range where the face candidate exists.
- It then selects predetermined face extraction modules of the second face extraction unit 809, which is composed of a plurality of face extraction modules corresponding to specific fluctuations, and issues start commands so that face extraction processing is performed at the face candidate position.
- The feature extraction results analyzed by the partial feature distribution determination unit 808 are the eye and mouth extraction results produced by the partial feature extraction unit 804 and the skin color region extraction result produced by the skin color region extraction unit 807.
- The analysis performed here is the same as that described in the first embodiment: it checks the necessary conditions that each variation-specific module constituting the second face extraction unit 809 should satisfy when a face is present.
- Since the skin color region extraction result is used in this embodiment, some analyses of that result will be described.
- The simplest example is to analyze the area of the extracted skin color region.
- Alternatively, the aspect ratio of the region extracted as skin color may be analyzed, or the relative positional relationship between the center of gravity of the skin color extraction result in the upper half and that in the lower half of the region where a face candidate was determined to exist may be analyzed.
- The first example can be set, according to the area, as one of the necessary conditions of a face extraction module for a specific size; the second can be set as one of the necessary conditions of modules corresponding to horizontal and vertical turning of the face; and the third as one of the necessary conditions of modules corresponding to in-plane rotation of the face.
- In addition, analyses combining results may be performed, such as comparing the area of the region where eyes were extracted, or of the region where no eyes were extracted, with the area of the skin color region.
- These analyses may also be performed only in a specific area, as described in the first embodiment; for example, the area of a non-skin-color region may be analyzed in a region considered to correspond to the hair.
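- These analyses might be realized as in the following sketch, operating on the boolean skin-color mask; the returned keys and the upper/lower split are illustrative assumptions.

```python
import numpy as np

def analyze_skin_region(mask, box):
    """Simple analyses of a skin-color mask inside a face-candidate box
    (x0, y0, x1, y1): area, aspect ratio, and the horizontal shift between
    the centroids of the upper and lower halves (a usable necessary
    condition for in-plane rotation modules)."""
    x0, y0, x1, y1 = box
    region = mask[y0:y1, x0:x1]
    ys, xs = np.nonzero(region)
    if len(xs) == 0:
        return None
    mid = (y1 - y0) // 2
    upper, lower = ys < mid, ys >= mid
    shift = (xs[upper].mean() - xs[lower].mean()
             if upper.any() and lower.any() else None)
    return {
        "area": int(len(xs)),
        "aspect_ratio": (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1),
        "upper_lower_centroid_shift": shift,
    }
```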
- the second face extraction unit 809 is a processing unit similar to the face extraction unit 26 of the first embodiment, and is composed of a plurality of face extraction modules corresponding to specific variations.
- In the present embodiment, unlike the first embodiment, the face is extracted at the face candidate position using not only the eye and mouth extraction results of the partial feature extraction unit 804 but also the skin color extraction result of the skin color region extraction unit 807, the first face extraction result, and the feature extraction results corresponding to the upper parts of the eyes and the upper lip.
- In this way, the accuracy of feature extraction can be improved by using as supplementary features not only the feature extraction results of the immediately preceding hierarchy, but also results from the same hierarchy (here, the first face extraction result), results inserted from outside the hierarchical feature extraction framework (here, the skin color region extraction result), results from hierarchies before the immediately preceding one (here, the features corresponding to the upper parts of the eyes and the upper lip), and, as in the embodiment described later, results from later-stage layers.
- Although this increases processing cost, the feature extraction processing of the second face extraction unit 809 is performed only by modules that received an activation instruction from the partial feature distribution determination unit 808 and only at positions where face candidates exist, so the increase in processing cost can be kept to a minimum.
- The detection result output unit 810 is a processing unit similar to the detection result output unit 27 of the first embodiment: from the results of the feature extraction processing performed by the modules of the second face extraction unit 809 in response to the activation commands of the partial feature distribution determination unit 808, it determines at which positions in the image faces are located and outputs the result. Here too, as described in the first embodiment, highly accurate detection can be achieved by integrating the outputs of a plurality of modules.
- The second embodiment described an example in which two-dimensional image data is used as input data and a face is detected as a specific target in the image.
- the third embodiment of the present invention is a modification of the second embodiment.
- processing for detecting a face in an image is performed.
- However, the present invention is not limited to this, and is also applicable to detecting other image patterns or audio data.
- FIG. 10 shows the configuration of the processing unit in the present embodiment.
- FIGS. 11A and 11B show the flow of processing in this embodiment.
- the configuration of the basic processing of this embodiment is the same as that described in the second embodiment.
- The processing in this embodiment will be described with reference to FIGS. 10, 11A, and 11B.
- The processing up to this point is exactly the same as steps S901 to S910 of the second embodiment, so its description is omitted.
- The partial feature distribution determination unit 1008 is also similar to the partial feature distribution determination unit 808 of the second embodiment, but according to the analysis result of the distribution of the feature extraction results, it gives start instructions not only to the face extraction modules of the second face extraction unit 1009, which correspond to a plurality of variations and perform face extraction at the face candidate position, but also to the second partial feature extraction unit 1011, which is composed of partial feature extraction modules supporting a plurality of variations. That is, the distribution of the partial feature extraction results in the range based on the face candidate position coordinates is determined (step S1111), and the flags of the face extraction modules to be activated are turned on (step S1112).
- the second partial feature extracting unit 101 is composed of a plurality of modules for extracting a partial feature corresponding to a specific variation, and receives a start instruction from the partial feature distribution determining unit 1008.
- the partial feature is re-extracted only at the specific position determined by the face candidate existence position of the module that received the activation instruction.
- That is, the partial feature extraction processing is performed at the positions determined by the face candidate existence position coordinates (steps S1113 to S1114).
- The second face extraction unit 1009 is a processing unit substantially similar to the second face extraction unit 809 of the second embodiment.
- Face extraction is performed using the features extracted by the partial feature extraction units (the first partial feature extraction unit 1004 and the second partial feature extraction unit 1011).
- That is, each face extraction module whose flag has been turned on performs face extraction at the face candidate existence positions, and the flag of a face extraction module that has executed face extraction is then turned off (steps S1115 to S1116).
- The detection result output unit 1010 is exactly the same as the detection result output unit 810 of the second embodiment, and steps S1117 to S1120 are exactly the same as steps S915 to S918 of the second embodiment, so their description is omitted.
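- As a rough illustration of the feedback flow in steps S1111 to S1120 described above, the following Python sketch outlines one possible structure; the class and function names (FaceExtractionModule, run_feedback_stage, and so on) are hypothetical, chosen only for this illustration.

```python
# Minimal sketch of the feedback loop of steps S1111 to S1120
# (hypothetical structure; names are illustrative only).

class FaceExtractionModule:
    def __init__(self, variation, partial_extractor=None):
        self.variation = variation                  # e.g. "rot_cw_15"
        self.partial_extractor = partial_extractor  # 2nd-stage extractor or None
        self.flag = False                           # activation flag (S1112)

    def extract_face(self, pos, feats):
        # Placeholder for variation-specific face extraction at one position.
        return (pos, self.variation, sum(feats) / len(feats))

def run_feedback_stage(candidates, modules, first_stage_feats):
    """candidates: [(x, y)]; first_stage_feats: dict pos -> feature list."""
    results = []
    for pos in candidates:
        for m in (m for m in modules if m.flag):    # only activated modules
            if m.partial_extractor is not None:
                # Steps S1113-S1114: re-extract variation-specific partial
                # features only at the face candidate position.
                feats = m.partial_extractor(pos)
            else:
                # No matching partial module (e.g. frontal face): reuse the
                # first partial feature extraction unit's results.
                feats = first_stage_feats[pos]
            results.append(m.extract_face(pos, feats))  # steps S1115-S1116
            m.flag = False                              # turn the flag off
    return results                                      # output stage: S1117-S1120
```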
- The partial feature distribution determination unit 1008 performs the same processing as in the second embodiment when analyzing the distribution of the partial feature extraction results.
- Based on the analysis result, an activation instruction is issued to the modules that extract faces corresponding to the respective variations, and an activation instruction is also issued to the second partial feature extraction unit 1011 so that it extracts the partial features corresponding to the variations of the face extraction modules that received the activation instruction. Specifically, for example, when an activation command is issued to the face extraction module corresponding to a clockwise in-plane rotation variation, an activation command is simultaneously issued to the partial feature extraction module corresponding to the same clockwise in-plane rotation variation.
- As described above, the second partial feature extraction unit 1011 is composed of a plurality of modules that extract partial features corresponding to a plurality of variations.
- Of these, the partial feature extraction modules corresponding to the face extraction modules that received an activation instruction from the partial feature distribution determination unit 1008 are activated, and partial features are extracted only within the specific ranges determined by the face candidate existence positions obtained as a result of the determination in unit 1006.
- The feature extraction method is the same as that described in the second embodiment.
- Each partial feature extraction module basically corresponds to one of the face extraction modules constituting the second face extraction unit, but a one-to-one correspondence is not required. For example, a partial feature extraction module corresponding to the frontal face extraction module need not exist. In such a case, when the activation command is issued to the frontal face extraction module, the processing in the second partial feature extraction unit 1011 may simply be omitted.
- Conversely, one partial feature extraction module may correspond to a plurality of types of face extraction modules.
- For example, for a face extraction module that responds to a clockwise in-plane rotation of 15 degrees and one that responds to a clockwise in-plane rotation of 30 degrees, the corresponding partial feature extraction may be performed by a single module that covers both variations.
- In this embodiment, a feedback mechanism is thus introduced that controls the operation of the feature extraction modules at a lower hierarchical level based on the feature extraction results output at an upper hierarchical level.
- That is, by activating the partial feature extraction module corresponding to the face extraction module that responds to a specific variation and that is activated in the second face extraction, more accurate feature extraction becomes possible.
- Re-extracting the features increases the processing cost, but since the processing is performed only at the specific positions of the modules that received an activation instruction, the increase in processing cost can be kept to a minimum.
- In this embodiment, the second partial feature extraction unit does not extract the mouth, but extracts only the eyes corresponding to the variation. If more accurate feature extraction is desired, a mouth corresponding to the variation may also be extracted, or a type of feature other than those extracted by the first partial feature extraction unit 1004 may be extracted. Furthermore, the feature extraction here differs from that of the first partial feature extraction unit 1004 in the following respect.
- Namely, eye extraction is performed using not only the partial feature extraction results for the eyes, mouth, and so on extracted by the first partial feature extraction unit 1004, but also the first face extraction result extracted by the first face extraction unit 1005.
- In other words, the feature extraction results in the same layer, which are features at the same level, and the feature extraction results of the upper layer, which are features at a higher level, are used supplementarily. This enables more accurate feature extraction processing.
- The second face extraction unit 1009 basically performs the same processing as the second face extraction unit 809 in the second embodiment.
- The difference from the second face extraction unit 809 in the second embodiment is that face extraction is performed not using the partial feature extraction results extracted by the first partial feature extraction unit 1004, but using the variation-specific partial feature extraction results extracted by the second partial feature extraction unit 1011 for the activated face extraction modules.
- For the mouth, however, the extraction result of the first partial feature extraction unit 1004 is used.
- In addition, as described for the second partial feature extraction unit 1011, if, for example, no partial feature extraction module corresponding to the frontal face extraction module exists, the features are not re-extracted in the second partial feature extraction unit 1011 when the activation instruction is issued to the frontal face extraction module; in that case, the feature extraction results of the first partial feature extraction unit 1004 may be used as they are.
- Moreover, when the partial feature extraction corresponding to the variation of the activated face extraction module is performed, the eye extraction results extracted by the first partial feature extraction unit 1004 are not used; however, in order to further improve accuracy, these feature extraction results may also be used supplementarily.
- As described above, the third embodiment, as a modification of the second embodiment, has been described using an example in which two-dimensional image data is used as input data and a face is detected as the specific target in the image.
- FIG. 12 is a diagram illustrating an example of the block configuration of an information processing apparatus that implements the present invention. As shown in the figure, this information processing apparatus is composed of a CPU 1201, a ROM 1202, a RAM 1203, an HD (hard disk) 1204, a CD 1205, a KB (keyboard) 1206, a CRT 1207, a camera 1208, and a network interface (I/F) 1209, communicably connected to each other via a bus 1210.
- The CPU 1201 controls the operation of the entire information processing apparatus by reading a processing program (software program) from the HD (hard disk) 1204 or the like and executing it.
- The ROM 1202 stores programs, various data used by the programs, and the like.
- The RAM 1203 is used as a work area or the like for temporarily storing processing programs and information to be processed for the various kinds of processing in the CPU 1201.
- The HD 1204 is an example of a large-capacity storage device, and stores various data such as model data, as well as processing programs that are transferred to the RAM 1203 or the like when various processes are executed.
- The CD (CD drive) 1205 has a function of reading data stored in a CD (CD-R), which is an example of an external storage medium, and of writing data to it.
- The KB (keyboard) 1206 is an operation unit with which the user inputs various instructions to the information processing apparatus.
- The CRT 1207 displays various kinds of instruction information to the user, as well as information such as character information and image information.
- The camera 1208 captures and inputs the image to be identified.
- The interface 1209 is used to acquire information from the network and to transmit information to the network.
- FIG. 13 is a diagram showing a configuration of a pattern detection device according to a fourth embodiment of the present invention.
- In FIG. 13, reference numeral 1300 denotes a signal input unit; 1301, a primary feature detection unit; 1311, a primary feature detection filter setting unit; 1302, a secondary feature detection unit; 1312, a secondary feature detection model setting unit; 1303, a tertiary feature detection unit; 1313, a tertiary feature detection model selection unit; 1323, a tertiary feature detection model holding unit; 1304, a quaternary feature detection unit; 1314, a quaternary feature detection model selection unit; and 1324, a quaternary feature detection model holding unit.
- The features of each order shown above are local features detected locally, and features of a higher order include features of lower orders.
- FIG. 14 shows examples of the features detected by the primary to quaternary feature detection units 1301 to 1304, respectively.
- The signal input unit 1300 inputs a signal to be processed, such as an image signal (e.g., image data).
- The primary feature detection unit 1301 performs processing for detecting a primary feature, which will be described later, on the signal input from the signal input unit 1300, and outputs the detection result to the secondary feature detection unit 1302.
- The primary feature detection filter setting unit 1311 sets the characteristics of the filter with which the primary feature detection unit 1301 detects primary features.
- The secondary feature detection unit 1302 applies the detection model set by the secondary feature detection model setting unit 1312 to the result detected by the primary feature detection unit 1301, performs processing for detecting a secondary feature, which will be described later, and passes the detection result to the tertiary feature detection unit 1303 and the tertiary feature detection model selection unit 1313.
- The secondary feature detection model setting unit 1312 sets the model indicating the positional relationship between the primary features that is used when the secondary feature detection unit 1302 detects secondary features.
- This model has attributes related to a predetermined shape; a plurality of such models may be prepared in advance, or a model may be created with the rotation angle as a parameter, by applying a rotational affine transformation or the like.
- In the present embodiment, a secondary feature is described as a model indicating the positional relationship between two primary features; however, the same applies when there are three or more.
- The tertiary feature detection unit 1303 applies the detection model selected by the tertiary feature detection model selection unit 1313 to the result detected by the secondary feature detection unit 1302, performs processing for detecting the tertiary feature, and passes the detection result to the quaternary feature detection unit 1304 and the quaternary feature detection model selection unit 1314.
- The tertiary feature detection model holding unit 1323 holds a plurality of models having different rotation angles (that is, different inclinations) to be selected by the tertiary feature detection model selection unit 1313. The tertiary feature detection model selection unit 1313 then selects and sets, from among the models held in the tertiary feature detection model holding unit 1323, the model indicating the positional relationship between the secondary features used when the tertiary feature detection unit 1303 detects a feature, based on the detection result from the secondary feature detection unit 1302.
- The quaternary feature detection unit 1304 applies the detection model selected by the quaternary feature detection model selection unit 1314 to the result detected by the tertiary feature detection unit 1303, performs processing for detecting the quaternary feature as described below, and outputs the detection result. Further, the quaternary feature detection model holding unit 1324 holds a plurality of models having different rotation angles (that is, different inclinations) to be selected by the quaternary feature detection model selection unit 1314. The quaternary feature detection model selection unit 1314 then selects and sets, from among the models held in the quaternary feature detection model holding unit 1324, the model indicating the positional relationship between the tertiary features used when the quaternary feature detection unit 1304 detects a feature, based on the detection result from the tertiary feature detection unit 1303.
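- To make the relationship between the detection units, model selection units, and model holding units concrete, the following Python sketch outlines one way such a layer could be wired together; the names, data shapes, and the threshold value are assumptions made for this illustration only.

```python
# Sketch of one hierarchical layer of FIG. 13: a model holding unit keeps
# rotated detection models, a model selection unit picks some of them based
# on the previous layer's result, and a detection unit applies them.

def choose_models(prev_scores, models, threshold=0.5):
    # Stand-in for the correlation-based selection rule: keep only models
    # whose rotation angle scored well in the previous layer.
    return {ang: m for ang, m in models.items()
            if prev_scores.get(ang, 0.0) > threshold}

def apply_model(model, prev_scores):
    # Stand-in for region-wise comparison: a higher-order feature holds
    # only if every constituent lower-order feature is present.
    return min(prev_scores.get(key, 0.0) for key in model)

class Layer:
    """One layer = model holding unit + selection unit + detection unit."""
    def __init__(self, models_by_angle):
        self.models_by_angle = models_by_angle   # holding unit (e.g. 1323)

    def detect(self, prev_scores):
        selected = choose_models(prev_scores, self.models_by_angle)  # e.g. 1313
        return {ang: apply_model(m, prev_scores)                     # e.g. 1303
                for ang, m in selected.items()}
```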
- As described above, the pattern detection device detects a predetermined pattern in the image input from the signal input unit 1300 by using a detection model of each order as the pattern model. That is, the pattern detection device comprises: a detection model holding unit (for example, the tertiary feature detection model holding unit 1323) that holds higher-order models (for example, tertiary feature detection models) each configured by combining predetermined lower-order models (for example, secondary feature detection models); a feature detection unit of each order (for example, the secondary feature detection unit 1302) that compares the lower-order models with the constituent parts of the pattern in the image and calculates feature amounts for those constituent parts; and a setting unit (for example, the tertiary feature detection unit 1303) that compares the higher-order models held in the detection model holding unit with the pattern in the image and sets a higher-order model as the pattern model of the pattern when each of the lower-order models constituting that higher-order model has the predetermined feature amount.
- The pattern detection device further comprises a detection unit (for example, the primary feature detection unit 1301) that detects partial features (for example, primary features) of the pattern from the image input from the signal input unit 1300, and a lower-order model setting unit (for example, the secondary feature detection model setting unit 1312) that sets the lower-order models (for example, secondary feature detection models) using predetermined partial models. The feature detection unit (for example, the secondary feature detection unit 1302) compares the partial models included in a lower-order model with the partial features of the pattern in the image and calculates the feature amount.
- The pattern detection device also comprises: a holding unit (for example, the quaternary feature detection model holding unit 1324) that holds models (for example, quaternary feature detection models) each formed by combining a plurality of higher-order models (for example, tertiary feature detection models); and means (for example, the quaternary feature detection unit 1304) that compares such a model with the predetermined pattern in the image and sets it as the pattern model of the predetermined pattern when all of the plurality of higher-order models have the predetermined feature amounts. The pattern model of the predetermined pattern in the image is thus set using models having a hierarchical configuration.
- FIG. 15 is a flowchart for explaining an operation example of the pattern detection device according to the fourth embodiment.
- Here, the operation will be described taking as an example the case where an image is used as the input signal and a face area in the image is detected.
- First, an image signal is input to the signal input unit 1300 (step S201).
- Next, the primary feature detection unit 1301 detects primary features (for example, edge components having directionality) at each position of the input image (step S202).
- FIG. 14 is a diagram illustrating examples of the features detected by each of the feature detection units (primary to quaternary feature detection units 1301 to 1304) in the pattern detection device according to the fourth embodiment. As shown in FIG. 14, the primary feature detection unit 1301 detects feature components in four different directions: a vertical feature 1-1, a horizontal feature 1-2, an upward-sloping diagonal feature 1-3, and a downward-sloping diagonal feature 1-4. In this embodiment, the primary features are described as features in the above four directions, but this is merely an example, and other features may be used as the primary features for detecting the secondary and subsequent features.
- The setting of the filters used to detect these four features is performed by the primary feature detection filter setting unit 1311 in FIG. 13.
- Such feature detection can be performed by applying enhancement processing using a filter that enhances the edge component in each direction, for example a Sobel filter or a Gabor function.
- Alternatively, direction-independent edge enhancement may first be performed with a Laplacian filter or the like, followed by processing that further enhances the features in each direction.
- A plurality of these feature detection filters may be prepared in advance, or they may be created by the primary feature detection filter setting unit 1311 with the direction as a parameter.
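- As a concrete illustration of such directional edge enhancement, the following sketch applies Sobel-type kernels in four directions using NumPy and SciPy; the kernel values and the convolution-based structure are common practice rather than anything specified in this document.

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel-type kernels for the four primary directions (vertical, horizontal,
# upward-sloping diagonal, downward-sloping diagonal). Values are the usual
# Sobel coefficients; any directional edge filter (e.g. Gabor) would do.
KERNELS = {
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
    "diag_up":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
    "diag_down":  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
}

def detect_primary_features(image):
    """Return one detection-result image per direction, same size as input."""
    image = image.astype(float)
    return {name: np.abs(convolve(image, k)) for name, k in KERNELS.items()}
```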
- The detection results of the primary feature detection unit 1301 are output, for each feature, as a detection result image of the same size as the input image. That is, for the primary features shown in FIG. 14, four detection result images are obtained, one for each of the four feature components in the vertical, horizontal, and two diagonal directions. By checking the primary feature amount at each position of the detection result image for each feature (for example, the number of pixel values equal to or greater than a certain value contained in the image), it can be determined whether each primary feature exists at each position of the input image.
- Next, the secondary features are detected by the secondary feature detection unit 1302, and the tertiary and quaternary features by the tertiary feature detection unit 1303 and the quaternary feature detection unit 1304, as described below. FIG. 14 also shows some examples of secondary, tertiary, and quaternary features.
- The secondary features are: right-open V-shaped features 2-1-1 to 2-1-4, left-open V-shaped features 2-2-1 to 2-2-4, horizontal parallel line features 2-3-1 to 2-3-4, and vertical parallel line features 2-4-1 to 2-4-4.
- The names of these features are determined for the case where the face is upright with respect to the image; when the face is rotated, the name of a feature and the actual orientation of that feature in the image may differ. That is, in the present embodiment, the setting unit of the lower-order models, represented by the secondary feature detection model setting unit 1312, sets a plurality of lower-order models by rotating lower-order models of the same shape through a plurality of angles.
- As tertiary features, eye features 3-1-1 to 3-1-4 and mouth features 3-2-1 to 3-2-4 are shown.
- As quaternary features, face features 4-1-1 to 4-1-4 and an inverted face feature 4-2-1 are shown; inverted face features corresponding to the face features 4-1-2 to 4-1-4 also exist as quaternary features.
- After the primary feature detection unit 1301 detects the four types of primary features at each position in the processing of step S202, the secondary feature detection unit 1302 detects the secondary features (step S203).
- Here, the case of detecting the right-open V-shaped feature 2-1-1 shown in FIG. 14 will be described; the other cases can be realized in the same manner.
- FIGS. 16A and 16B are diagrams for explaining the model relating to the right-open V-shaped feature 2-1-1 among the secondary features.
- This right-open V-shaped feature 2-1-1 has an upward-sloping diagonal feature 1-3, which is a primary feature, at its top, and a downward-sloping diagonal feature 1-4 at its bottom.
- Therefore, using the primary feature detection results obtained in step S202, it is only necessary to find positions where an upward-sloping diagonal feature 1-3 exists at the top and a downward-sloping diagonal feature 1-4 exists at the bottom; the right-open V-shaped feature 2-1-1 exists at such positions. In this manner, a secondary feature can be detected by combining a plurality of types of primary features.
- However, the size of a face in an image is not fixed; the sizes of the eyes and mouth vary from person to person, and the eyes and mouth open and close, so the size of the V-shape changes and it also rotates.
- Errors caused by edge extraction processing and the like may also occur. Therefore, in the present embodiment, a right-open V-shaped detection model 400 as shown in FIG. 16B is considered. In this right-open V-shaped detection model 400, 403 is defined as an upward-sloping diagonal region and 404 as a downward-sloping diagonal region. If, among the primary features obtained in step S202, only an upward-sloping diagonal feature 1-3 exists in the upward-sloping diagonal region 403 and only a downward-sloping diagonal feature 1-4 exists in the downward-sloping diagonal region 404, it is assumed that the right-open V-shaped feature 2-1-1 exists at that position. By doing so, processing that is robust against changes in size or shape and against rotation becomes possible.
- In the present embodiment, when the center of an image region having an upward-sloping diagonal feature exists in the upward-sloping diagonal region 403 in FIG. 16B and the center of an image region having a downward-sloping diagonal feature exists in the downward-sloping diagonal region 404, the right-open V-shaped feature 2-1-1 is taken to exist. Note that the present invention is not limited to the case where the centers exist as described above; for example, the feature may be taken to exist when the entire image region having the primary feature is included in each region.
- Further, the upward-sloping diagonal region 403 and the downward-sloping diagonal region 404 are not limited to the rectangular shapes shown in FIG. 16B and may have any shape; the same applies to the other regions.
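- A minimal sketch of this region-based combination test is shown below, assuming the detection-result images from the earlier sketch; the region coordinates, threshold, and function names are illustrative choices only.

```python
import numpy as np

def region_max(result_img, region):
    """Maximum primary feature response inside a rectangular region.
    region = (top, bottom, left, right), a stand-in for regions 403/404."""
    t, b, l, r = region
    return result_img[t:b, l:r].max()

def detect_right_open_v(primary, pos, threshold=0.5, half=4):
    """Test for the right-open V-shaped feature 2-1-1 around pos=(y, x).
    The upper region should contain an upward-sloping diagonal feature 1-3,
    the lower region a downward-sloping diagonal feature 1-4."""
    y, x = pos
    upper = (y - half, y, x, x + half)      # upward-sloping diagonal region
    lower = (y, y + half, x, x + half)      # downward-sloping diagonal region
    s_up = region_max(primary["diag_up"], upper)
    s_down = region_max(primary["diag_down"], lower)
    # Both primary features must exceed the threshold in their regions;
    # the feature value at pos is e.g. the average of the region maxima.
    if s_up > threshold and s_down > threshold:
        return (s_up + s_down) / 2.0
    return 0.0
```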
- FIGS. 17A to 17D are diagrams showing examples of rotated detection models for detecting secondary features. For example, consider detection model groups obtained by rotating the four types of secondary feature detection models shown in FIG. 17A counterclockwise in steps of 45 degrees, giving four groups.
- FIG. 17A is the group of detection models for detecting the secondary features of a face rotated by approximately 0 degrees or 180 degrees, where the frontal upright face is taken as 0 degrees; FIG. 17B is the group for a face rotated by approximately 90 degrees or -90 degrees; FIG. 17C is the group for a face rotated by approximately 45 degrees or -135 degrees; and FIG. 17D is the group for a face rotated by approximately -45 degrees or 135 degrees.
- In the figures, 1-1 to 1-4 indicate regions containing images having the primary features of the same reference numerals shown in FIG. 14.
- Each of the detection model groups shown in FIGS. 17A to 17D consists of four types of detection models for detecting the four types of secondary features: the right-open V-shaped feature 2-1-1, the left-open V-shaped feature 2-2-1, the horizontal parallel line feature 2-3-1, and the vertical parallel line feature 2-4-1. The numeral attached to each detection model indicates which secondary feature shown in FIG. 14 is detected by that detection model.
- The names of these right-open V-shaped, left-open V-shaped, horizontal parallel line, and vertical parallel line features are based on the case where the face is upright. Therefore, for example, in FIG. 17A the horizontal parallel line feature shows two lines extending in the horizontal direction, as in 2-3-1, matching its name, whereas in FIG. 17B the name of the horizontal parallel line feature refers, as shown by 2-3-2, to two lines that actually extend in the vertical direction. In this way, owing to rotation, the name of a feature may not correspond to the shape the actual feature exhibits.
- The rectangular regions indicated by reference numerals 1-1 to 1-4 in FIGS. 17A to 17D are regions in which the primary features detected in step S202 exist, and the reference numerals assigned to each region are the same as those of the primary features shown in FIG. 14. That is, when only the primary feature indicated by the numeral exists in these rectangular regions, the feature detected by that detection model is present. Therefore, by using all of these detection models, the secondary features can be detected even for a rotated (tilted) face.
- The setting of the secondary feature detection models is performed by the secondary feature detection model setting unit 1312 in FIG. 13.
- A plurality of such detection models may be prepared in advance; alternatively, for example, only the detection models for detecting the secondary features of a face rotated by approximately 0 or 180 degrees, shown in FIG. 17A, may be prepared, and the other models may be created by the secondary feature detection model setting unit 1312 by applying a rotational transformation to these models and changing the type of primary feature to be detected.
- Then, the secondary feature detection unit 1302 detects the secondary features using the set detection models. That is, a secondary feature is detected using the values of the primary features constituting it, by judging whether the value of the primary feature in each region set by the detection model is equal to or greater than a threshold. For example, consider detecting a right-open V-shaped feature as a secondary feature at a predetermined position using the 0-degree right-open V-shaped detection model 2-1-1. In this case, as shown in FIG. 16B, it is judged whether the maximum value of the upward-sloping diagonal feature 1-3 existing in the upward-sloping diagonal region 403 exceeds the threshold, and whether the maximum value of the downward-sloping diagonal feature 1-4 existing in the downward-sloping diagonal region 404 exceeds the threshold; the value at that position is then, for example, the average of these maximum values.
- The detection results obtained in this way are output, for each secondary feature, as detection result images of the same size as the input image. By examining the value at each position of the detection result image for each feature, it can be determined whether each secondary feature in each rotation direction exists at that position in the input image.
- A characteristic here is that the primary features are not detected anew in each region of the secondary feature detection model. That is, in detecting the right-open V-shaped feature 2-1-1, which is one of the secondary features, the upward-sloping diagonal feature 1-3 and the downward-sloping diagonal feature 1-4 are not detected again in the upward-sloping diagonal region and the downward-sloping diagonal region. The detection of these primary features has already been completed in step S202; in step S203, it is only necessary to judge, using a threshold, whether each primary feature exists in those regions. Then, when the plurality of primary features are determined to exist in their respective regions, it is determined that the secondary feature exists at that position.
- This processing method for detecting a feature is the same for the tertiary and quaternary features, which makes it possible to reduce the processing cost.
- Next, the tertiary feature detection model selection unit 1313 selects the tertiary feature detection models (step S204).
- For example, consider detecting an eye feature (reference numerals 3-1-1 to 3-1-4 in FIG. 14) from the secondary features detected in step S203.
- FIGS. 19A and 19B are diagrams illustrating examples of eye detection models for detecting an eye feature in the tertiary feature detection unit 1303.
- FIG. 19A shows an eye detection model 700 for detecting the eye feature (reference numeral 3-1-1 in FIG. 14) with a rotation of approximately 0 or 180 degrees, where the upright face is taken as 0 degrees. An eye feature with a rotation of approximately 0 or 180 degrees can be detected by satisfying the combination in which the right-open V-shaped feature 2-1-1, a secondary feature at 0-degree rotation, is on the left side, the left-open V-shaped feature 2-2-1 is on the right side, and the horizontal parallel line feature 2-3-1 and the vertical parallel line feature 2-4-1 are in the middle between those V-shaped features.
- Correspondingly, in the eye detection model 700, the right-open V-shaped region 701 for detecting the right-open V-shaped feature 2-1-1 is on the left side, the left-open V-shaped region 702 for detecting the left-open V-shaped feature 2-2-1 is on the right side, and the horizontal parallel region 703 for detecting the horizontal parallel line feature 2-3-1 and the vertical parallel region 704 for detecting the vertical parallel line feature 2-4-1 exist in the middle between those V-shaped regions.
- FIG. 19B shows an eye detection model 710 for detecting the eye feature (reference numeral 3-1-2 in FIG. 14) whose rotation is approximately 90 degrees or -90 degrees.
- An eye feature with a rotation of approximately 90 or -90 degrees can be detected by satisfying the combination in which the right-open V-shaped feature 2-1-2, a secondary feature at 90-degree rotation, is on the upper side, the left-open V-shaped feature 2-2-2 is on the lower side, and the horizontal and vertical parallel line features 2-3-2 and 2-4-2 are in the middle between those V-shaped features. Correspondingly, in the eye detection model 710, the right-open V-shaped region 711 for detecting the right-open V-shaped feature 2-1-2 is on the upper side, the left-open V-shaped region 712 for detecting the left-open V-shaped feature 2-2-2 is on the lower side, and the horizontal parallel region 713 and the vertical parallel region 714 for detecting the horizontal and vertical parallel line features 2-3-2 and 2-4-2 are in the middle between those V-shaped regions. The models for 45 degrees and 135 degrees can be realized in the same manner.
- Here, based on the detection results of the secondary features detected in step S203, the tertiary feature detection model selection unit 1313 selects the detection models that the tertiary feature detection unit 1303 uses for tertiary feature detection.
- If the secondary features 2-1-1 to 2-4-4 at all of the rotation angles detected in step S203 are used, it is possible to detect the tertiary features 3-1-1 to 3-2-4 at all the rotation angles shown in FIG. 14; however, that method significantly increases the computational cost. Therefore, the tertiary feature models used for detection are selected by the tertiary feature detection model selection unit 1313, based on the detection results of the secondary features detected in step S203.
- In other words, the pattern detection device further comprises, for the tertiary feature detection unit 1303, the tertiary feature detection model selection unit 1313, which limits the number of higher-order models (tertiary feature detection models) to be compared with the pattern, based on the feature amounts of the lower-order models calculated by the secondary feature detection unit 1302. The same applies to the quaternary feature detection model selection unit 1314.
- FIGS. 18A and 18B are schematic diagrams illustrating the model selection method in the tertiary feature detection model selection unit 1313.
- The graph in FIG. 18A shows the detection result values (correlation values) of the secondary features at a certain position; the horizontal axis shows the rotation angle, with the upright orientation taken as 0 degrees, and the vertical axis shows the correlation value, whose range is from 0 (no correlation) to 1 (maximum correlation). The horizontal axis shows the results for the secondary features rotated by -45, 45, and 90 degrees on either side of 0 degrees; this is because, as shown in FIGS. 17A to 17D, the rotation angles at the time of secondary feature detection are set every 45 degrees.
- Letting Sn denote the correlation value at each angle and Sth a threshold, the maximum Sn among the angles satisfying Sn > Sth is denoted Sp, and the angle θp at that time is selected. Then, letting the second largest Sn be Sq, when Sq > k · Sp is satisfied, the angle θq at that time is also selected. Furthermore, letting the third largest Sn be Sr, when Sr > k' · Sq is satisfied, the angle θr at that time is also selected.
- In the example of FIG. 18A, the correlation value exceeds the threshold, and the angle θp giving the maximum correlation value Sp is selected. If the second correlation value is higher than 70% of the maximum correlation value Sp (that is, if Sq > 0.7 · Sp), the angle of the second correlation value is also selected; the correlation value at this time is Sq. Likewise, if the third correlation value is higher than 70% of the second correlation value (that is, if Sr > 0.7 · Sq), the angle of the third correlation value is also selected; the correlation value at this time is Sr.
- The rotation angles of the tertiary features to be detected are selected by the above selection method. Therefore, when no angle exceeds the threshold, no angle is selected; when there are angles exceeding the threshold, the rotation angles to be selected, and their number, are determined based on the distribution of the correlation values at the respective angles. Detection models corresponding to the selected rotation angles are then selected.
- Although correlation values are used here, other selection methods may be used; for example, a predetermined number of models at the angles with the highest values may be selected. The selection processing in this case is performed by the tertiary feature detection model selection unit 1313 of the pattern detection device shown in FIG. 13, and the selected detection models are those held in the tertiary feature detection model holding unit 1323.
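- The cascaded selection rule described above (threshold Sth, then ratios k and k' of roughly 0.7) can be sketched as follows; the parameter values and the function name are illustrative assumptions.

```python
def select_angles(correlations, s_th=0.5, k=0.7, k_prime=0.7):
    """correlations: {angle_deg: correlation value in [0, 1]}.
    Returns the rotation angles whose detection models should be used."""
    # Keep only angles whose correlation exceeds the threshold,
    # ordered from strongest to weakest.
    above = sorted(((s, a) for a, s in correlations.items() if s > s_th),
                   reverse=True)
    if not above:
        return []                        # no angle selected
    selected = [above[0][1]]             # theta_p for the maximum S_p
    if len(above) > 1 and above[1][0] > k * above[0][0]:
        selected.append(above[1][1])     # theta_q if S_q > k * S_p
        if len(above) > 2 and above[2][0] > k_prime * above[1][0]:
            selected.append(above[2][1])  # theta_r if S_r > k' * S_q
    return selected

# Example: correlations measured every 45 degrees, as in FIG. 18A.
print(select_angles({-45: 0.2, 0: 0.9, 45: 0.7, 90: 0.3}))  # -> [0, 45]
```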
- FIG. 18A thus shows the correlation value of the secondary feature amount at a certain position for each rotation angle.
- Next, the tertiary feature detection unit 1303 detects the tertiary features using the tertiary feature detection models set in step S204 (step S205).
- The method of detecting each tertiary feature is the same as in step S203: a tertiary feature is detected by checking whether each secondary feature detected in step S203 exists in the detection regions of the detection models selected in step S204.
- In the detection example of the eye feature, one of the tertiary features, described for the processing in step S204, the two types of detection models, for 0 degrees and 45 degrees, are used at that position to detect the eye feature.
- The detection model for the 0-degree eye feature is the detection model 700 shown in FIG. 19A described above. That is, the eye feature is detected when the following four conditions are satisfied simultaneously: (1) in the right-open V-shaped region 701 of the detection model 700, the correlation value of the detection result of the 0-degree right-open V-shaped feature 2-1-1 of the secondary features exceeds the threshold, and the correlation values of the other features are relatively low; (2) in the left-open V-shaped region 702, the correlation value of the detection result of the 0-degree left-open V-shaped feature 2-2-1 exceeds the threshold, and the correlation values of the other features are relatively low; (3) in the horizontal parallel region 703, the correlation value of the detection result of the 0-degree horizontal parallel line feature 2-3-1 exceeds the threshold, and the correlation values of the other features are relatively low; and (4) in the vertical parallel region 704, the correlation value of the detection result of the 0-degree vertical parallel line feature 2-4-1 exceeds the threshold, and the correlation values of the other features are relatively low.
- Similarly, the detection of the 45-degree eye feature is performed using the 45-degree detection results of the secondary features detected using the 45-degree secondary detection models. These detection results are then output to the quaternary feature detection unit 1304 and the quaternary feature detection model selection unit 1314; these processes are performed in the tertiary feature detection unit 1303 of the pattern detection device in FIG. 13.
- Next, the quaternary feature detection model selection unit 1314 selects the quaternary feature detection models (step S206).
- The selection method here is based on the correlation values, as in step S204. For example, suppose that the detection results of the tertiary features, for which 0 degrees and 45 degrees were selected as in the description of the processing in step S205, are as shown in FIG. 18B. Here, the 45-degree correlation value is 70% or less of the 0-degree correlation value; therefore, as the detection models for detecting the face feature, the detection models for detecting the 0-degree face (reference numeral 4-1-1 in FIG. 14) and the 180-degree inverted face (reference numeral 4-2-1 in FIG. 14) are selected.
- Next, the quaternary feature detection unit 1304 detects the quaternary feature using the quaternary feature detection models selected in step S206 (step S207).
- The detection method in this case is the same as in steps S203 and S205.
- In detecting the face feature, which is the quaternary feature, the face size determined from the positions of both eyes and the mouth can be detected together with the rotation angle of the face.
- As described above, in the present embodiment, detection models for detecting each feature are prepared according to the rotation angle, and the detection models used for detecting the next-stage feature are selected according to the detection results of the preceding-stage features. Therefore, regardless of the rotation of each feature, the detection accuracy is improved, and the detection accuracy of the finally detected pattern is improved.
- Further, the shapes of the eyes and mouth change depending on opening and closing and on facial expressions, so the rotation angle of, for example, the right-open V-shaped feature and the rotation angle of the face may differ. For this reason, the tertiary and quaternary features are not detected only at the rotation angle at which the correlation value of the secondary feature is maximal; the detection of the next stage is performed based on the correlation values.
- FIG. 20 is a block diagram showing a configuration of an imaging device using the pattern detection device according to the fourth embodiment.
- The imaging device 2001 shown in FIG. 20 comprises an imaging optical system 2002 including an imaging lens and a drive control mechanism for zooming, a CCD or CMOS image sensor 2003, an imaging parameter measurement unit 2004, a video signal processing circuit 2005, a storage unit 2006, a control signal generation unit 2007 that generates control signals for controlling the imaging operation and the imaging conditions, a display 2008 that also serves as a viewfinder such as an EVF (Electronic View Finder), a strobe light emission unit 2009, a recording medium 2010, and the like, and is further provided with the above-described pattern detection device as a subject detection (recognition) device 2011.
- In the imaging device 2001, for example, detection of a face image of a person (that is, detection of its position, size, and rotation angle) in the captured video is performed by the subject detection (recognition) device 2011. When the detected position information and the like of the person are input from the subject detection (recognition) device 2011 to the control signal generation unit 2007, the control signal generation unit 2007 generates, based on the output from the imaging parameter measurement unit 2004, control signals for optimally performing focus control, exposure condition control, white balance control, and the like for that person.
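- As a rough sketch of how such detection results might drive camera control, the following outlines one possible control-signal computation; the data layout, the target luminance value, and the control rules are assumptions for illustration, not the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class FaceDetection:
    x: float      # face center (image coordinates)
    y: float
    size: float   # face size in pixels
    angle: float  # in-plane rotation in degrees

def make_control_signals(face: FaceDetection, mean_luma_at_face: float):
    """Derive simple focus/exposure hints from one detected face.
    The target luminance (0.45) and the mapping are illustrative only."""
    return {
        # Focus on the detected face region.
        "af_window": (face.x, face.y, face.size, face.size),
        # Push exposure toward a mid luminance on the face.
        "exposure_bias": 0.45 - mean_luma_at_face,
    }

print(make_control_signals(FaceDetection(320, 240, 80, 15), 0.3))
```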
- The imaging device 2001 including the above-described pattern detection device as the subject detection (recognition) device 2011 has thus been described; naturally, the algorithm of the above-described pattern detection device may instead be implemented as a program, run on a CPU, and mounted in the imaging device 2001.
- In the present embodiment, the features of the pattern to be detected are divided into four layers, detected in order from the primary features to the quaternary features, and the pattern to be detected is confirmed at the end; however, the number of layers may be three or less, or five or more. This also applies to the fifth embodiment and the sixth embodiment described later.
- FIG. 21 is a block diagram showing the configuration of the pattern detection device according to the fifth embodiment of the present invention.
- In FIG. 21, reference numeral 2100 denotes a signal input unit; 2101, a primary feature detection unit; 2111, a primary feature detection filter setting unit; 2102, a secondary feature detection unit; 2112, a secondary feature detection model setting unit; 2103, a tertiary feature detection unit; 2113, a tertiary feature detection model selection unit; 2123, a tertiary feature detection model holding unit; 2133, a secondary feature measurement unit; 2104, a quaternary feature detection unit; 2114, a quaternary feature detection model selection unit; 2124, a quaternary feature detection model holding unit; and 2134, a tertiary feature measurement unit.
- The parts that differ from the above-described fourth embodiment are basically the secondary feature measurement unit 2133 and the tertiary feature measurement unit 2134, as well as the tertiary feature detection model selection unit 2113 and the quaternary feature detection model selection unit 2114.
- In the fourth embodiment, the tertiary feature detection model selection unit 1313 selected the detection models used for detecting the tertiary features based on the output values of the secondary feature detection unit 1302, and the quaternary feature detection model selection unit 1314 selected the detection models used when detecting the quaternary features based on the output values of the tertiary feature detection unit 1303. In contrast, the present embodiment differs in that the tertiary feature detection model selection unit 2113 selects the detection models used when detecting the tertiary features based on the output of the secondary feature measurement unit 2133, and likewise the quaternary feature detection model selection unit 2114 selects the detection models for detecting the quaternary features based on the output values of the tertiary feature measurement unit 2134.
- The secondary feature measurement unit 2133 measures the rotation angle of the secondary feature based on the output of the secondary feature detection unit 2102.
- Similarly, the tertiary feature measurement unit 2134 measures the rotation angle of the tertiary feature based on the output of the tertiary feature detection unit 2103.
- Specifically, the rotation angle θa is estimated from the detection results at the discrete angles by, for example, a correlation-weighted average of the form θa = Σ(θi · Si) / Σ(Si) (equation (1)), where θi denotes each angle and Si denotes the correlation value at that angle.
- As the angles used in this calculation, all the angles computed by the secondary feature detection unit 2102 may be used, only the angles whose correlation value exceeds a threshold may be used, or the angles may be selected based on a percentage of the maximum correlation value.
- The secondary feature measurement unit 2133 (or the tertiary feature measurement unit 2134) also outputs the top two angles by correlation value among the angles used to calculate the rotation angle.
- Equation (1) estimates the rotation angle θa of the secondary or tertiary feature from the results of detection at discrete angles; the present embodiment is not limited to this particular calculation formula, and other calculation formulas may be used.
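- A small sketch of this measurement step is given below, using the weighted-average form of equation (1) as reconstructed above; since the exact formula is not fully legible in the source, treat this form as an assumption.

```python
def measure_rotation(correlations):
    """Estimate the rotation angle from detections at discrete angles.
    correlations: {angle_deg: correlation value}. Returns (theta_a, and the
    top two angles by correlation), mirroring units 2133/2134."""
    total = sum(correlations.values())
    theta_a = sum(a * s for a, s in correlations.items()) / total  # eq. (1)
    top_two = sorted(correlations, key=correlations.get, reverse=True)[:2]
    return theta_a, top_two

theta_a, (theta_b, theta_c) = measure_rotation({0: 0.8, 45: 0.4})
print(theta_a, theta_b, theta_c)   # 15.0 0 45
```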
- Next, the operations of the tertiary feature detection model selection unit 2113 and the quaternary feature detection model selection unit 2114 will be described. Since the operations of these two feature detection model selection units are basically the same, only the tertiary feature detection model selection unit 2113 is described below as an example.
- FIG. 22 is a flowchart for explaining the operation of the tertiary feature detection model selection unit 2113 of the pattern detection device according to the fifth embodiment.
- First, the tertiary feature detection model selection unit 2113 determines whether θc has been input (step S1002). If θc has not been input and only θb is available (Yes), a detection model for detecting the tertiary feature at the rotation angle θb is selected (step S1003). On the other hand, when the two angles θb and θc are input (No), the tertiary feature detection model selection unit 2113 performs a discrimination process on θa, θb, and θc (step S1004). This determination is performed based on, for example, equations (2) and (3), which test whether the measured angle θa lies close to θb or close to θc, using boundaries such as (2θb + θc)/3 and (θb + 2θc)/3 that divide the interval between the two angles.
- If θa falls within the range given by equation (2), that is, close to θb, detection models for detecting the tertiary feature at the two angles θb and (θb + θc)/2 are selected (steps S1005 to S1006). If θa falls within the range given by equation (3), that is, close to θc, the tertiary feature detection model selection unit 2113 selects detection models for detecting the tertiary feature at the two angles θc and (θb + θc)/2 (step S1007). Otherwise, if θa is within neither range (No), the tertiary feature detection model selection unit 2113 selects detection models for detecting the tertiary feature at the two angles θb and θc (step S1008).
- In this manner, the tertiary feature detection model selection unit 2113 selects the detection models with which the tertiary feature detection unit 2103 detects the tertiary features, based on the rotation angle obtained by the secondary feature measurement unit 2133 and the two angles used in its calculation. The same operation applies to the quaternary feature detection model selection unit 2114.
- FIG. 23 is a schematic diagram for explaining a method of selecting a detection model in the fifth embodiment.
- The operation of the flowchart shown in FIG. 22 described above will be explained with reference to the schematic diagram of FIG. 23. Depending on the range in which the rotation angle obtained by the secondary feature measurement unit 2133 falls, the detection models used for detecting the tertiary feature are changed. When the rotation angle is in range B, the detection models used by the tertiary feature detection unit 2103 are those rotated by 0 degrees and 45 degrees; when it is in range A, they are those rotated by 0 degrees and 22.5 degrees; and when it is in range C, they are those rotated by 22.5 degrees and 45 degrees.
- In this way, by narrowing the interval between the two detection angles, the accuracy of the rotation angle calculated in the next-stage feature detection is improved. For this purpose, it is necessary to prepare the detection models for detecting the tertiary features at smaller angular intervals than the detection models for detecting the secondary features, and to prepare the detection models for detecting the quaternary features at still finer angles.
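- The range-based switching of FIG. 23 can be sketched as follows; the range boundaries used here are guesses consistent with ranges A to C described above, not values taken from the figure.

```python
def select_model_angles(theta_a):
    """Map the measured rotation angle to the pair of model angles used
    by the tertiary feature detection unit 2103 (boundaries illustrative)."""
    if theta_a < 11.25:            # range A: close to 0 degrees
        return (0.0, 22.5)
    elif theta_a > 33.75:          # range C: close to 45 degrees
        return (22.5, 45.0)
    else:                          # range B: between the two
        return (0.0, 45.0)

print(select_model_angles(15.0))   # -> (0.0, 45.0)
```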
- FIG. 24 is a diagram illustrating a change in the rotation angle of the detection model in each layer in the fifth embodiment.
- If there is no preceding-stage detection result corresponding to the rotation angle of a selected detection model, the detection results at the two rotation angles sandwiching that rotation angle are used; for example, detection using the detection model rotated by 22.5 degrees uses the 0-degree and 45-degree rotated secondary features detected by the secondary feature detection unit 2102.
- Equations (2) and (3) above are used to compare the rotation angle θa of the secondary or tertiary feature measured by the secondary feature measurement unit 2133 or the tertiary feature measurement unit 2134 with the rotation angles θi at which the preceding-stage features were detected, and to determine whether the measured rotation angle θa is close to one of the rotation angles θi used for detection. Therefore, the present invention is not limited to the above formulas, and other determination methods may be used.
- As described above, in the present embodiment, the detection models for detecting each feature are prepared with smaller angular widths for higher-order features, and the detection models used for detecting the next-stage features are selected in accordance with the preceding-stage feature detection results. Therefore, regardless of the rotation of each feature, the detection accuracy is improved while suppressing an increase in calculation cost, and the higher the order of the feature, the higher the obtained detection accuracy.
- FIG. 25 is a block diagram showing the configuration of the pattern detection device according to the sixth embodiment of the present invention.
- In FIG. 25, reference numeral 2500 denotes a signal input unit; 2501, a primary feature detection unit; 2511, a primary feature detection filter setting unit; 2502, a secondary feature detection unit; 2512, a secondary feature detection model setting unit; 2503, a tertiary feature detection unit; 2513, a tertiary feature detection model setting unit; 2523, a tertiary feature reference model holding unit; 2533, a secondary feature measurement unit; 2504, a quaternary feature detection unit; 2514, a quaternary feature detection model setting unit; 2524, a quaternary feature reference model holding unit; and 2534, a tertiary feature measurement unit.
- The parts that differ from the fifth embodiment are basically the tertiary feature detection model setting unit 2513, the quaternary feature detection model setting unit 2514, the tertiary feature reference model holding unit 2523, and the quaternary feature reference model holding unit 2524.
- In the fifth embodiment, the tertiary feature detection model selection unit 2113 selected the detection models used to detect the tertiary features from the tertiary feature detection model holding unit 2123 based on the output of the secondary feature measurement unit 2133, and the quaternary feature detection model selection unit 2114 selected the detection models used when detecting the quaternary features from the quaternary feature detection model holding unit 2124 based on the output of the tertiary feature measurement unit 2134.
- In contrast, the present embodiment differs in that the tertiary feature detection model setting unit 2513 sets the detection models used to detect the tertiary features from the reference models held in the tertiary feature reference model holding unit 2523, based on the output of the secondary feature measurement unit 2533, and the quaternary feature detection model setting unit 2514 sets the detection models used when detecting the quaternary features from the reference models held in the quaternary feature reference model holding unit 2524, based on the output of the tertiary feature measurement unit 2534.
- Next, the operations of the tertiary feature detection model setting unit 2513 and the quaternary feature detection model setting unit 2514 will be described. Since the operations of these two feature detection model setting units are basically the same, the tertiary feature detection model setting unit 2513 is used below as an example.
- First, the tertiary feature detection model setting unit 2513 takes the output of the secondary feature measurement unit 2533 as parameters and calculates an angle θd using equation (4). In equation (4), θi is each angle, Si is the correlation value at that angle, and θa is the rotation angle given by equation (1) described in the fifth embodiment; θd thus represents an angular interval derived from the spread of the detection results around θa. Next, an angle θe is obtained using equation (5).
- FIG. 26 is a diagram showing an outline of the two rotation angles θa + θf and θa - θf in the sixth embodiment.
- The detection models are created by rotating the reference model held in the tertiary feature reference model holding unit 2523 by the obtained rotation angles θa ± θf. This operation is the same for the quaternary feature detection model setting unit 2514.
- Equation (4) above determines the rotation angle interval of the detection models for the tertiary or quaternary feature from the measured rotation angle θa and the results obtained by detection at discrete angles. However, if the angle calculated by equation (4) becomes too small, the detection accuracy deteriorates; therefore, in the present embodiment, equation (5) is also calculated, and when the angles are set, whichever of equations (4) and (5) gives the larger calculated angle is selected.
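- The following sketch puts these pieces together under stated assumptions: a correlation-weighted spread is used as a stand-in for equation (4), and a fixed minimum interval as a stand-in for equation (5), since the exact formulas are not legible in the source.

```python
def set_model_angles(correlations, theta_a, min_interval=11.25):
    """Return the two rotation angles (theta_a - theta_f, theta_a + theta_f)
    at which the reference model is rotated. correlations: {angle: S_i}.
    min_interval plays the role of equation (5)'s lower bound (assumed)."""
    total = sum(correlations.values())
    # Assumed form of equation (4): correlation-weighted spread around theta_a.
    theta_d = sum(s * abs(a - theta_a) for a, s in correlations.items()) / total
    theta_f = max(theta_d, min_interval)   # pick the larger of eqs. (4), (5)
    return theta_a - theta_f, theta_a + theta_f

print(set_model_angles({0: 0.8, 45: 0.4}, 15.0))  # -> (-5.0, 35.0)
```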
- Note that the setting of the detection models in the present embodiment is not limited to the above method; any other method may be used as long as an appropriate rotation angle interval for the detection models of the tertiary or quaternary features can be set.
- Using the detection models rotated by these two rotation angles, the tertiary feature detection unit 2503 performs detection of the tertiary feature (or the quaternary feature). If there is no preceding-stage detection result corresponding to the rotation angle of the selected detection model, then, as in the fifth embodiment, the detection results at the two rotation angles sandwiching that rotation angle are used, or the detection result at the closest rotation angle is used.
- As described above, in the present embodiment, the detection models for detecting each feature are set, based on the preceding-stage detection results, so that the angles of the detection results used for detecting the next-stage feature always sandwich the measured angle, and the width of that sandwich is adjusted based on the detection result values. Therefore, the detection accuracy is improved while suppressing an increase in calculation cost, regardless of the rotation of each feature.
- The pattern detection (recognition) device and the processing means on which the pattern detection method described in the present embodiment operates can also be mounted on an imaging device, as in the fourth and fifth embodiments.
- FIG. 27 shows a functional configuration of the pattern recognition device of the seventh embodiment.
- The pattern detection device according to the present embodiment is applicable to an imaging device or the like, and has a configuration in which a plurality of reference data for hierarchically detecting the plurality of features constituting the recognition target from the target image are held, and, based on the reference data, the data for detecting a target feature are set using parameters obtained from the detection results of the preceding-stage features.
- Specifically, as shown in FIG. 27, the pattern detection device comprises a signal input unit 2700, a primary feature detection unit 2701, a primary feature detection filter setting unit 2711, a secondary feature detection unit 2702, a secondary feature detection model setting unit 2712, a secondary feature reference model holding unit 2722, a tertiary feature detection unit 2703, a tertiary feature detection model setting unit 2713, a tertiary feature reference model holding unit 2723, a quaternary feature detection unit 2704, a quaternary feature detection model setting unit 2714, a quaternary feature reference model holding unit 2724, a pattern confirmation unit 2705, a confirmation pattern setting unit 2715, and a reference confirmation pattern holding unit 2725.
- A signal to be processed (an image signal, an audio signal, or the like; here, the signal of the target image) is input to the signal input unit 2700.
- The primary feature detection unit 2701 performs processing for detecting primary features on the signal input from the signal input unit 2700, supplies the processing result (the primary feature detection result) to the secondary feature detection unit 2702, and supplies the primary feature detection result and its parameters to the secondary feature detection model setting unit 2712.
- the primary feature detection filter setting unit 2711 sets the filter characteristics or parameters for detecting the primary features in the primary feature detection unit 2701.
- the secondary feature detection unit 2702 performs processing for detecting secondary features on the primary feature detection result from the primary feature detection unit 2701, using the detection model set by the secondary feature detection model setting unit 2712; it supplies the processing result (the secondary feature detection result) to the tertiary feature detection unit 2703, and supplies the secondary feature detection result and its parameters to the tertiary feature detection model setting unit 2713.
- the secondary feature detection model setting unit 2712 sets a model indicating the positional relationship of each primary feature used when the secondary features are detected by the secondary feature detection unit 2702, using the reference model held in the secondary feature reference model holding unit 2722, the primary feature detection result from the primary feature detection unit 2701, and its parameters.
- the secondary feature reference model holding unit 2722 holds the reference model of the detection model set by the secondary feature detection model setting unit 2712.
- the tertiary feature detection unit 2703 performs processing for detecting tertiary features on the secondary feature detection result from the secondary feature detection unit 2702, using the detection model set by the tertiary feature detection model setting unit 2713; it supplies the processing result (the tertiary feature detection result) to the quaternary feature detection unit 2704, and supplies the tertiary feature detection result and its parameters to the quaternary feature detection model setting unit 2714.
- the tertiary feature detection model setting unit 2713 sets a model indicating the positional relationship between the secondary features used when the tertiary feature detection unit 2703 detects the tertiary features, using the reference model held in the tertiary feature reference model holding unit 2723, the secondary feature detection result from the secondary feature detection unit 2702, and its parameters.
- the tertiary feature reference model holding unit 2723 holds the reference model of the detection model set by the tertiary feature detection model setting unit 2713.
- the quaternary feature detection unit 2704 performs processing for detecting the quaternary feature on the tertiary feature detection result from the tertiary feature detection unit 2703, using the detection model set by the quaternary feature detection model setting unit 2714; it supplies the processing result (the quaternary feature detection result) to the pattern confirmation unit 2705, and supplies the quaternary feature detection result and its parameters to the confirmation pattern setting unit 2715.
- the quaternary feature detection model setting unit 2714 sets a model indicating the positional relationship of the tertiary features used when the quaternary feature is detected by the quaternary feature detection unit 2704, using the reference model held in the quaternary feature reference model holding unit 2724, the tertiary feature detection result from the tertiary feature detection unit 2703, and its parameters.
- the quaternary feature reference model holding unit 2724 holds the reference model of the detection model set by the quaternary feature detection model setting unit 2714.
- the pattern confirmation unit 2705 checks whether or not the signal input by the signal input unit 2700 contains the confirmation pattern set by the confirmation pattern setting unit 2715.
- the confirmation pattern setting unit 2715 sets the pattern used by the pattern confirmation unit 2705, using the reference pattern held in the reference confirmation pattern holding unit 2725, the quaternary feature detection result from the quaternary feature detection unit 2704, and its parameters.
- the reference confirmation pattern holding unit 2725 holds the reference pattern of the confirmation pattern set by the confirmation pattern setting unit 2715.
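- The data flow among these units can be summarized as follows. The sketch below is a minimal illustration only (hypothetical class and function names, not the patented implementation): each stage adapts its reference model with the parameters measured from the preceding stage's detection result, mirroring the cascade of FIG. 27.

```python
# Minimal sketch of the FIG. 27 cascade (hypothetical interfaces, not the
# patented implementation). Each stage adapts its reference model using
# parameters measured from the preceding stage's detection result.

class Stage:
    def __init__(self, reference_model, detect_fn, measure_fn):
        self.reference_model = reference_model  # from the reference model holding unit
        self.detect = detect_fn                 # feature detection unit
        self.measure = measure_fn               # extracts parameters for the next stage

    def run(self, prev_result, prev_params):
        # model setting unit: adapt the reference model with the parameters
        model = self.reference_model.adapted(prev_params)  # hypothetical method
        result = self.detect(prev_result, model)
        return result, self.measure(result)

def detect_pattern(image, stages, confirm):
    """Run the primary..quaternary stages, then the pattern confirmation unit."""
    result, params = image, None
    for stage in stages:
        result, params = stage.run(result, params)
    return confirm(image, result, params)
```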
- FIG. 28 is a flowchart showing the operation of the pattern recognition device 100.
- the signal input unit 2700 inputs an image signal as a signal to be processed (step S2801).
- the primary feature detection unit 2701 detects primary features at each position of the image (target image) input by the signal input unit 2700, using, for example, the filters set by the primary feature detection filter setting unit 2711 (step S2802).
- the primary features detected from the target image include, for example, a large vertical feature (1-1) and a large horizontal feature (1-2).
- each feature is output as a detection result image of the same size as the target image.
- the plurality of filters used in the primary feature detection unit 2701 may be prepared in advance, or may be created by the primary feature detection filter setting unit 2711 with the direction and size as parameters.
- the secondary features detected in the processing described below are the right-open V-shaped feature (2-1), the left-open V-shaped feature (2-2), the horizontal parallel line feature (2-3), and the vertical parallel line feature (2-4). The tertiary features are assumed to be the eye feature (3-1) and the mouth feature (3-2), and the quaternary feature is the face feature (4-1).
- next, the secondary feature detection model setting unit 2712 sets a model for detecting the secondary features in the secondary feature detection unit 2702 (step S2803).
- here, the setting of a detection model for detecting the right-open V-shaped feature (2-1) shown in FIG. 14 will be considered as an example.
- the right-open V-shaped feature (2-1) consists of primary features: an upward-sloping (right-up) diagonal feature at the top and a downward-sloping (right-down) diagonal feature at the bottom. That is, in order to detect a right-open V-shaped feature, it suffices to use the primary feature detection results obtained in step S2802 to find a position where an upward-sloping diagonal feature exists at the top and a downward-sloping diagonal feature exists at the bottom; the right-open V-shaped feature (2-1) exists at that position.
- however, the size of a face present in the target image is not fixed, and the sizes of the eyes and mouth vary from individual to individual and also change depending on the conditions.
- therefore, a right-open V-shape detection reference model 400 as shown in FIG. 16B is used.
- in this reference model, reference numeral 403 denotes the upward-sloping diagonal region, and reference numeral 404 denotes the downward-sloping diagonal region. If, in the upward-sloping diagonal region 403, only a large right-up diagonal feature or only a small right-up diagonal feature among the primary features obtained in step S2802 is present, and likewise, in the downward-sloping diagonal region 404, only a large right-down diagonal feature or only a small right-down diagonal feature is present, it is assumed that the right-open V-shaped feature (2-1) exists at that position.
- however, as shown in FIGS. 29A and 29B, it is difficult to detect right-open V-shaped features having significantly different sizes using the same right-open V-shape detection reference model 400. For example, by setting the right-open V-shape detection reference model 400 shown in FIG. 16B to be very large, and as a result making the upward-sloping diagonal region 403 and the downward-sloping diagonal region 404 very wide, it becomes possible to detect right-open V-shaped features of different sizes.
- incidentally, both the right-up diagonal feature and the right-down diagonal feature are components of the right-open V-shaped feature, and their sizes are almost the same; if the right-open V-shaped feature is large, the sizes of the right-up diagonal feature and the right-down diagonal feature are also large.
- therefore, the size of the reference model for detecting the secondary feature is set to suit the size of the primary features detected in step S2802.
- that is, when the target image is as shown in FIG. 29A, the primary features are detected with a small-size filter, and when the target image is as shown in FIG. 29B, the primary features are detected with a large-size filter; accordingly, the size of the model for detecting the right-open V-shaped feature, which is a secondary feature, also varies with the size of the filter.
- therefore, with the size of the filter that detected the primary features as a parameter, the model for detecting each secondary feature is enlarged or reduced, and the secondary feature detection model used for detecting each secondary feature is set.
- FIG. 29C shows a model for detecting the right-open V-shape when the face size is small, and FIG. 29D shows a model for detecting the right-open V-shape when the face size is large. These models are obtained by changing the size of the right-open V-shape detection reference model 400 shown in FIG. 16B at different magnifications.
- alternatively, a method in which filters of multiple sizes are prepared for detecting the primary features, multiple processing channels are prepared according to size, and detection is performed in each processing channel is also effective.
- in the present embodiment, however, the above-described problem is solved by changing the size of the detection model according to the detection results of the preceding hierarchy.
- Each feature as shown in FIG. 14 can be detected by a combination of the features detected in the previous step processing.
- for example, the left-open V-shaped feature can be detected from the right-down diagonal feature and the right-up diagonal feature, the horizontal parallel line feature from the horizontal feature, and the vertical parallel line feature from the vertical feature. As tertiary features, the eye feature can be detected from the right-open V-shaped feature, the left-open V-shaped feature, the horizontal parallel line feature, and the vertical parallel line feature, and the mouth feature from the right-open V-shaped feature, the left-open V-shaped feature, and the horizontal parallel line feature. The quaternary face feature can then be detected from the eye feature and the mouth feature.
- next, the secondary feature detection unit 2702 detects the secondary features of the target image using the secondary feature detection models set in step S2803 (step S2804). Specifically, the detection of each secondary feature is performed using the values of the primary features constituting it; for example, whether the secondary feature exists is determined by whether the value of each constituent primary feature is equal to or larger than a threshold value.
- for example, in the case of the right-open V-shaped feature, if the maximum value of the right-up diagonal features present in the upward-sloping diagonal region is higher than the threshold value, and the maximum value of the right-down diagonal features present in the downward-sloping diagonal region is higher than the threshold value, it is assumed that a right-open V-shaped feature exists at that position, and the value at that position is taken as the average of those maximum values. Conversely, if the value of any constituent primary feature is lower than the threshold value, no secondary feature exists at that position, and the value at that position is set to "0".
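- As an illustration of this decision rule, the following sketch (hypothetical array and model layouts; `up_region`/`down_region` are assumed names) takes the maxima of the two primary detection result images inside the model's regions, thresholds them, and returns the average of the maxima as the secondary feature value:

```python
import numpy as np

def detect_v_feature(up_map, down_map, model, pos, threshold):
    """Step S2804 rule for the right-open V-shaped feature (sketch).

    up_map/down_map : primary detection result images (right-up / right-down
                      diagonal), the same size as the target image
    model           : dict with "up_region"/"down_region" as (y0, y1, x0, x1)
                      offsets relative to pos (assumed layout)
    """
    def region_max(result_map, rect):
        y0, y1, x0, x1 = rect
        y, x = pos
        patch = result_map[y + y0:y + y1, x + x0:x + x1]
        return patch.max() if patch.size else 0.0

    m_up = region_max(up_map, model["up_region"])
    m_down = region_max(down_map, model["down_region"])
    if m_up >= threshold and m_down >= threshold:
        return (m_up + m_down) / 2.0  # feature value = average of the maxima
    return 0.0                        # no secondary feature at this position
```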
- the secondary feature detection result obtained as described above is output in the form of a detection result image of the same size as the target image for each secondary feature. That is, if the secondary features are as shown in FIG. 14 above, images of four types of secondary feature detection results can be obtained. By referring to the value of each position in these detection result images, it can be determined whether or not each secondary feature exists at the corresponding position in the target image.
- note that the primary features are not necessarily detected throughout each region of the secondary feature detection model. In step S2804, whether or not each primary feature exists in these regions is determined using a threshold, and when all the constituent primary features are present, it is determined that the secondary feature exists at that position.
- the processing method for such feature detection is the same for the following tertiary features and quaternary features.
- further, in step S2804, the parameters used for setting the next tertiary feature detection models are obtained. For example, as shown in FIG. 30, at the same time as the detection of the right-open V-shaped feature, the distance between the point indicating the maximum value of the upward-sloping diagonal feature and the point indicating the maximum value of the downward-sloping diagonal feature is obtained as a parameter. Then, this parameter is output together with each secondary feature detection result.
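- A minimal sketch of this parameter measurement, under the same assumed layout as the previous sketch, returns the distance between the two points of maximum response (it assumes both regions lie inside the result images):

```python
import numpy as np

def v_feature_parameter(up_map, down_map, model, pos):
    """Distance between the maximum-response points of the two diagonal
    features (the parameter output with the V-shape detection result)."""
    def argmax_point(result_map, rect):
        y0, y1, x0, x1 = rect
        y, x = pos
        patch = result_map[y + y0:y + y1, x + x0:x + x1]
        iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
        return np.array([y + y0 + iy, x + x0 + ix], dtype=float)

    p_up = argmax_point(up_map, model["up_region"])
    p_down = argmax_point(down_map, model["down_region"])
    return float(np.linalg.norm(p_up - p_down))
```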
- next, the tertiary feature detection model setting unit 2713 sets the models used for detecting the tertiary features in the tertiary feature detection unit 2703, using the reference models held in the tertiary feature reference model holding unit 2723, the secondary feature detection results from the secondary feature detection unit 2702, and their parameters (step S2805).
- FIG. 19A shows an example of an eye detection reference model 700 for detecting an eye.
- in this model, the right-open V-shaped region 701, in which the right-open V-shaped feature (see (2-1) in FIG. 14), a secondary feature, exists, is on the left side; the left-open V-shaped region 702, in which the left-open V-shaped feature (see (2-2) in FIG. 14) exists, is on the right side; and the horizontal parallel line region 703, in which the horizontal parallel line feature (see (2-3) in FIG. 14) exists, and the vertical parallel line region 704, in which the vertical parallel line feature (see (2-4) in FIG. 14) exists, lie between these V-shaped regions.
- this reference model is enlarged or reduced to obtain a detection model suitable for detecting the tertiary feature.
- the parameter obtained in step S2804 is used for enlarging or reducing the reference model.
- that is, the distance between the position indicating the maximum value of the right-up diagonal feature and the position indicating the maximum value of the right-down diagonal feature, obtained when detecting the right-open V-shaped feature, depends on the size of the eye. Therefore, with this distance as a parameter, an eye feature detection model is set based on the reference model of the eye.
- at this time, a detection model corresponding to each position is set using the parameters of the secondary features. That is, for example, as shown in FIG. 31A, when faces having different sizes (that is, different eye sizes) are present in the target image, an eye feature detection model suitable for each position is set, as shown in FIG. 31B, with the size of the right-open V-shaped feature, a secondary feature, as a parameter.
- this conceptually shows that the eye feature detection model 8001 has a size calculated from the parameter value of the secondary feature at its position, and that the size of the eye feature detection model 8002 is likewise determined from the parameter value of the secondary feature at its position.
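- The per-position model setting can be sketched as follows; the region offsets and the reference distance below are illustrative assumptions, not values from the patent:

```python
def set_eye_model(reference_model, v_distance, ref_distance):
    """Step S2805 style model setting (sketch): scale every region of the
    eye reference model by the ratio of the locally measured V-feature
    distance to the distance the reference model was built for."""
    s = v_distance / ref_distance
    return {name: tuple(int(round(c * s)) for c in rect)
            for name, rect in reference_model.items()}

# Illustrative region offsets (y0, y1, x0, x1) relative to the eye centre:
ref = {"right_v": (-4, 4, -12, -4), "left_v": (-4, 4, 4, 12),
       "h_lines": (-2, 2, -6, 6), "v_lines": (-3, 3, -5, 5)}
# A position with twice the reference V-feature distance gets a model
# twice as large as a position on a small face:
large_eye_model = set_eye_model(ref, v_distance=24.0, ref_distance=12.0)
```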
- the tertiary feature detection unit 2703 detects a tertiary feature using the tertiary feature detection model set in step S2805 (step S2806).
- the method of detecting each tertiary feature here is the same as that in step S2804, and therefore detailed description thereof is omitted.
- as for the parameters, for example, in the case of eye detection, the distance between the positions showing the maximum values of the right-open V-shaped feature and the left-open V-shaped feature (a distance corresponding to the width of the eye) is obtained and used as a parameter.
- next, the quaternary feature detection model setting unit 2714 sets the model indicating the positional relationship of each tertiary feature used when the quaternary feature detection unit 2704 detects the quaternary feature, using the reference model held in the quaternary feature reference model holding unit 2724, the tertiary feature detection results from the tertiary feature detection unit 2703, and their parameters (step S2807).
- a face feature detection model is set based on the reference model of the face using the parameter indicating the width of the eye obtained in step S2806.
- next, the quaternary feature detection unit 2704 detects the quaternary feature using the quaternary feature detection model set in step S2807 (step S2808). Since the detection method here is the same as in steps S2804 and S2806, detailed description is omitted. As for the parameters, for example, in the case of detecting the face feature, the positions of both eyes and the mouth are set as parameters; these parameters are used in the next step S2809.
- next, the confirmation pattern setting unit 2715 sets the confirmation pattern to be used in the pattern confirmation unit 2705, using the reference pattern held in the reference confirmation pattern holding unit 2725, the quaternary feature detection result from the quaternary feature detection unit 2704, and its parameters (step S2809).
- although the quaternary feature is detected by the processing of steps S2801 to S2808, if the background of the target image contains regions similar to the plurality of tertiary features constituting the quaternary feature, and their positional relationships are also similar, erroneous detection may occur in the quaternary feature detection.
- therefore, a general reference pattern of the object to be detected is prepared, and the size and shape of this pattern are corrected based on the parameters obtained in step S2808 to obtain the confirmation pattern.
- using this confirmation pattern, it is determined whether or not the pattern to be finally detected exists in the target image.
- specifically, when the face is used as the detection pattern, a general reference pattern of the face is prepared, a face confirmation pattern is obtained by correcting this reference pattern, and it is determined whether the face pattern exists in the target image using this face confirmation pattern.
- that is, the confirmation pattern is set based on the reference pattern using the parameters obtained in step S2808. In setting the face confirmation pattern, it is set based on the reference pattern of the face, using the parameters indicating the positions of the eyes and the mouth obtained in step S2808.
- Figures 32A and 32B show an example of the confirmation pattern.
- FIG. 32A shows a face reference pattern. This face reference pattern is obtained, for example, by preparing a plurality of faces, normalizing their sizes, and averaging their luminance values.
- using the parameters obtained in step S2808, that is, the positions of both eyes and the position of the mouth, size and rotation conversion is applied to the face reference pattern of FIG. 32A, as shown in FIG. 32B. Specifically, for example, the size is converted using the distance between the eyes and the distance between the midpoint of the eyes and the mouth, and the rotation conversion is performed using the inclination between the eyes, to set the face confirmation pattern.
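- The following sketch implements such a size and rotation conversion with plain nearest-neighbour resampling; the point format (x, y) and the reference distances are assumptions for illustration, not values from the patent:

```python
import numpy as np

def set_face_confirmation_pattern(ref_pattern, ref_eye_dist, ref_eye_mouth_dist,
                                  eye_l, eye_r, mouth):
    """Deform the average face pattern by size and rotation (step S2809 style).

    Sizes come from the detected inter-eye distance and the distance from the
    eye midpoint to the mouth; rotation comes from the eye inclination.
    Points are (x, y); nearest-neighbour resampling keeps the sketch simple.
    """
    eye_l, eye_r, mouth = (np.asarray(p, dtype=float) for p in (eye_l, eye_r, mouth))
    d_eye = np.linalg.norm(eye_r - eye_l)
    d_mouth = np.linalg.norm(mouth - (eye_l + eye_r) / 2.0)
    sx, sy = d_eye / ref_eye_dist, d_mouth / ref_eye_mouth_dist
    theta = np.arctan2(eye_r[1] - eye_l[1], eye_r[0] - eye_l[0])
    h, w = ref_pattern.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out_h, out_w = max(1, int(round(h * sy))), max(1, int(round(w * sx)))
    out = np.zeros((out_h, out_w), dtype=ref_pattern.dtype)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for y in range(out_h):
        for x in range(out_w):
            u = x - (out_w - 1) / 2.0
            v = y - (out_h - 1) / 2.0
            # invert the forward map p_out = rotate(theta) . scale(sx, sy) . p_ref
            xr = (cos_t * u + sin_t * v) / sx + cx
            yr = (-sin_t * u + cos_t * v) / sy + cy
            xi, yi = int(round(xr)), int(round(yr))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = ref_pattern[yi, xi]
    return out
```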
- the method of setting the confirmation pattern is not limited to the method described above.
- for example, a plurality of reference patterns having different sizes and rotation amounts may be prepared, and one of these reference patterns may be selected using the parameters obtained in step S2808.
- a plurality of reference patterns may be combined and set by a morphing technique or the like using parameters.
- next, the pattern confirmation unit 2705 obtains the detection pattern from the target image using the confirmation pattern set in step S2809 (step S2810).
- specifically, the correlation between the confirmation pattern obtained in step S2809 and the region at the corresponding position in the target image is calculated, and if the value exceeds a threshold value, it is assumed that the detection pattern exists at that position.
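- A minimal sketch of this correlation check follows (the threshold value of 0.7 is illustrative, not one given in the patent):

```python
import numpy as np

def confirm_at(image, pattern, top_left, threshold=0.7):
    """Normalized correlation between the confirmation pattern and the image
    region at a candidate position (step S2810 style)."""
    y, x = top_left
    h, w = pattern.shape
    region = image[y:y + h, x:x + w].astype(float)
    if region.shape != pattern.shape:
        return False, 0.0  # candidate too close to the image border
    a = region - region.mean()
    b = pattern.astype(float) - pattern.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    corr = float((a * b).sum() / denom) if denom > 0 else 0.0
    return corr > threshold, corr
```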
- as described above, in the present embodiment, a reference model for detecting each feature is prepared, and a detection model is set based on the reference model using parameters obtained from the feature detection results of the preceding stage. As a result, the detection accuracy of each feature is improved, and the detection accuracy of the finally detected pattern is improved. Furthermore, in the final confirmation process, when the correlation with the average pattern is examined, the average pattern is deformed, for example by rotation and size change, according to the positions of the features found so far, which has the effect of improving confirmation accuracy.
- by mounting the function of the pattern recognition (detection) device shown in FIG. 27 on an imaging device as shown in FIG. 20, it can be used, for example, for focusing on a specific subject, color correction of the specific subject, or exposure control. That is, it is possible to detect a person in a captured video and perform optimal shooting control based on the detection.
- in the present embodiment, the features of the pattern to be detected from the target image are divided into four layers, the primary through quaternary features are detected sequentially, and the pattern to be detected is finally confirmed.
- however, the present invention is not limited to four layers; an arbitrary number of layers, such as three or five, can be applied. The same applies to the eighth and ninth embodiments described below.
- a face region is obtained from a target image using a face pattern as a detection pattern.
- the present invention is not limited to only face detection.
- "2" is a secondary feature (upper feature) consisting of a horizontal segment and a diagonally lower right segment, and a vertical segment and a diagonal right segment. It consists of a secondary feature consisting of an upward segment (intermediate feature), and a secondary feature consisting of an obliquely rightward upward segment and a horizontal segment (lower feature).
- in this case, the primary features are detected from the target image, the secondary features are detected from the primary feature detection results, and "2" is detected as a tertiary feature using the secondary feature detection results. Similarly, "4" can be detected as a tertiary feature from the secondary feature detection results.
- in the present embodiment, the present invention is applied to, for example, an information processing device 1200 as shown in FIG. 34.
- the information processing device 1200 of the present embodiment has, in particular, the function of the pattern recognition device 100 described above.
- the information processing device 1200 includes a control unit 1270, a calculation unit 1210, a weight setting unit 1220, a reference weight holding unit 1230, a parameter detection unit 1240, an input signal memory 1250, an input signal memory control unit 1251, an intermediate result memory 1260, and an intermediate result memory control unit 1261.
- the control unit 1270 controls the operation of the entire information processing apparatus.
- the control unit 1270 controls the calculation unit 1210, the weight setting unit 1220, the reference weight holding unit 1230, the parameter detection unit 1240, the input signal memory control unit 1251, and the intermediate result memory control unit 1261 so that the pattern recognition operation is performed.
- the calculation unit 1210 performs, using the data from the input signal memory 1250 or the intermediate result memory 1260 and the weight data from the weight setting unit 1220, a product-sum operation and a nonlinear operation such as a logistic function on these values, and stores the result in the intermediate result memory 1260.
- the weight setting unit 1220 sets weight data based on the reference weight data from the reference weight holding unit 1230, using the parameters from the parameter detection unit 1240, and supplies the weight data to the calculation unit 1210.
- the reference weight holding unit 1230 holds, for each feature, reference weight data serving as a reference for detecting that feature in the input signal, and supplies it to the weight setting unit 1220.
- the parameter detection unit 1240 detects the parameters used when setting the weight data in the weight setting unit 1220, using the data in the intermediate result memory 1260, and supplies the parameters to the weight setting unit 1220.
- the input signal memory 1250 holds input signals to be processed, such as image signals and audio signals.
- the input signal memory control unit 1251 controls the input signal memory 1250 when an input signal is stored in the input signal memory 1250 and when the stored input signal is supplied to the calculation unit 1210.
- the intermediate result memory 1260 holds the operation result obtained by the operation unit 1210.
- the intermediate result memory control unit 1261 controls the intermediate result memory 1260 when the operation results from the calculation unit 1210 are stored in the intermediate result memory 1260 and when the held intermediate results are supplied to the calculation unit 1210 and the parameter detection unit 1240.
- in the following description, the input signal to be processed is assumed to be an image signal.
- the neural network handles information related to recognition (detection) of an object or a geometric feature in a local region in an input signal in a hierarchical manner, and its basic structure is a so-called convolutional network structure.
- the output from the final layer is the category of the recognized object as the recognition result and the position information on the input data.
- the data input layer 3501 is a layer for inputting local area data from a photoelectric conversion element such as a CMOS sensor or a CCD element.
- the first feature detection layer 3502 (1, 0) detects local low-order features of the image pattern input from the data input layer 3501 (geometric features such as specific direction components and specific spatial frequency components, and possibly features including color components), in a local region centered on each position of the full screen (or in a local region centered on each of predetermined sampling points over the entire screen), for as many feature categories as there are, at multiple scale levels or resolutions at the same location.
- the feature integration layer 3503 (2, 0) has a predetermined receptive field structure (hereinafter, "receptive field" means the range of connection with the output elements of the immediately preceding layer, and "receptive field structure" means the distribution of those connection weights), and integrates multiple neuron element outputs within the same receptive field from the feature detection layer 3502 (1, 0) (integration by operations such as local averaging or sub-sampling by maximum output detection).
- each receptive field of the neurons in the integration layer has a common structure among the neurons in the same layer, and each receptive field of the neurons in the feature detection layer likewise has a common structure among the neurons in the same layer. The gist of the present embodiment is that this structure is changed according to the output results (detection results) of the neurons in the preceding stage.
- the subsequent feature detection layers 3502 ((1, 1), (1, 2), ..., (1, M)) and feature integration layers 3503 ((2, 1), (2, 2), ..., (2, M)) are similar to the layers described above: the former ((1, 1), ...) detect a plurality of different features in each feature detection module, and the latter ((2, 1), ...) integrate the detection results for the multiple features from the preceding feature detection layer.
- the former feature detection layers are connected (wired) so as to receive the cell element outputs of the preceding feature integration layer belonging to the same channel.
- subsampling, a process performed in the feature integration layer, averages the outputs from local regions (the local receptive fields of the feature integration layer neurons) of the feature detection cell population of the same feature category.
- FIG. 36 is a flowchart showing, as a specific example of the operation of the information processing apparatus, the operation in the case of recognizing a face pattern from a target image, as in the seventh embodiment.
- first, the input signal memory control unit 1251 stores the signal to be processed (here, an image signal) in the input signal memory 1250 under the control of the control unit 1270 (step S1401).
- this step S1401 corresponds to the processing by the data input layer 3501 shown in FIG. 35.
- next, the weight setting unit 1220 sets, for the calculation unit 1210, the primary feature detection weight data held in the reference weight holding unit 1230 (weight data for performing edge extraction in each direction and at each size), for example as shown in FIG. 14 (step S1402).
- the size and direction may be used as parameters, and the primary feature detection weight data may be generated by the weight setting unit 1220.
- next, the calculation unit 1210 detects the primary features (step S1403). The primary feature detection in this step S1403 corresponds to the processing of the feature detection layer 3502 (1, 0) shown in FIG. 35, and the calculation unit 1210 executes processing equivalent to the detection module 3504 for each feature f.
- each primary feature detection weight data set in step S1402 corresponds to the structure of the receptive field 3505 for detecting the corresponding feature f. The calculation unit 1210 acquires the image signal from the input signal memory 1250 and executes a product-sum operation between the local region at each position of the image signal (the region corresponding to the receptive field 3505) and each primary feature detection weight data.
- an example of the input/output characteristic of a feature detection layer neuron executed by the calculation unit 1210 is expressed by the following equation (6). That is, the output u_SL(n, k) of the neuron at position n on the cell surface for detecting the k-th feature in the L-th stage is

  u_SL(n, k) = f( Σ_{κ=1..K_{C(L-1)}} Σ_{v ∈ W_L} w_L(v, κ, k) · u_{C(L-1)}(n + v, κ) )    (6)
- here, u_CL(n, κ) denotes the output of the neuron at position n on the κ-th cell surface of the L-th feature integration layer, and K_CL denotes the number of feature types (cell surfaces) of the L-th feature integration layer. w_L(v, κ, k) is the input connection to the neuron at position n on the k-th cell surface of the L-th feature detection cell layer from the neuron at position n + v on the κ-th cell surface of the preceding feature integration layer. W_L is the receptive field of the detection cell, and its size is finite.
- since the processing in step S1403 is the primary feature detection, L is "1", and the preceding stage corresponds to the data input layer, so the number of feature types in the preceding stage is one. Since eight types of primary features are detected, eight types of results are obtained.
- f() denotes nonlinear processing applied to the result of the product-sum operation; for example, a logistic function can be used as this nonlinear processing.
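- As a concrete reading of equation (6), the sketch below computes one detection-layer output: a product-sum over the receptive field W_L and the preceding cell surfaces, followed by the nonlinearity f (a logistic function is used here purely as an illustration, and the receptive field is anchored at n for simplicity):

```python
import numpy as np

def detection_neuron_output(prev_planes, weights, n):
    """u_SL(n, k) of equation (6) for one feature k at one position n.

    prev_planes : array (K, H, W), outputs u_C(L-1) of the preceding
                  feature integration layer (K cell surfaces)
    weights     : array (K, h, w), input connections w_L(v, kappa, k);
                  (h, w) is the receptive field W_L, anchored at n here
    """
    K, h, w = weights.shape
    y, x = n
    patch = prev_planes[:, y:y + h, x:x + w]  # u_C(L-1)(n + v, kappa)
    s = float((weights * patch).sum())        # product-sum over kappa and v
    return 1.0 / (1.0 + np.exp(-s))           # nonlinearity f (logistic)
```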
- the result of the non-linear processing is held in the intermediate result memory 1260.
- next, the weight setting unit 1220 sets the primary feature integration weight data held in the reference weight holding unit 1230 for the calculation unit 1210 (step S1404).
- the primary feature integration weight data is weight data for performing processing such as local averaging of the primary features detected in step S1403 and detection of the maximum value.
- next, the calculation unit 1210 executes the product-sum operation of the primary feature detection results held in the intermediate result memory 1260 and the primary feature integration weight data set in step S1404 (integration processing of the detection results of each primary feature) (step S1405).
- the processing in this step S1405 corresponds to the processing of the feature integration layer 3503 (2, 0) shown in FIG. 35, that is, to the integration module for each feature f; it is equivalent to the integration of multiple neuron element outputs existing in the same receptive field from the feature detection layer 3502 (1, 0) (operations such as local averaging and sub-sampling by maximum output detection).
- specifically, the calculation unit 1210 executes processing such as averaging and maximum value detection in a local region for each primary feature detection result.
- for example, the calculation unit 1210 executes the product-sum operation of the following equation (8):

  u_CL(n, κ) = Σ_{v ∈ V_L} d_L(v) · u_SL(n + v, κ)    (8)

  Here, d_L(v) is the input connection from the neurons of the L-th feature detection layer to the neuron existing on the cell surface of the L-th feature integration cell layer, and is a monotonically decreasing function of |v|. V_L denotes the receptive field of the integration cell, and its size is finite.
- the calculation unit 1210 holds the result of the product-sum operation by the above equation (8) in the intermediate result memory 1260. At this time, the calculation unit 1210 may further perform nonlinear processing on the result of the product-sum operation and store that result in the intermediate result memory 1260.
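- The integration of equation (8) amounts to local pooling of each detection result plane; a minimal sketch with a square receptive field follows (the field size and the averaging/maximum choice are illustrative):

```python
import numpy as np

def integrate_plane(detect_plane, field=3, mode="avg"):
    """Feature integration of equation (8) (sketch): each integration neuron
    pools the detection outputs inside its finite receptive field, by local
    averaging or by maximum detection for sub-sampling."""
    H, W = detect_plane.shape
    r = field // 2
    out = np.zeros((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            patch = detect_plane[max(0, y - r):y + r + 1,
                                 max(0, x - r):x + r + 1]
            out[y, x] = patch.mean() if mode == "avg" else patch.max()
    return out
```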
- as a result, the intermediate result memory 1260 holds the primary feature integration results, obtained by integrating the primary feature detection results in a local region for each feature, for each size and each direction.
- next, the weight setting unit 1220 sets the secondary feature detection weight data for the calculation unit 1210 (step S1406); this is the weight data for detecting each secondary feature shown in FIG. 14.
- here, when detecting each feature from the secondary features onward, the weight setting unit 1220 sets the feature detection weight data depending on the size of the features detected in the previous hierarchy.
- that is, the weight setting unit 1220 causes the parameter detection unit 1240 to set, as a parameter, the receptive field size indicated by the primary feature detection weight data that detected each primary feature. Then, the weight setting unit 1220 corrects the reference secondary feature detection weight data held in the reference weight holding unit 1230 using the parameter set by the parameter detection unit 1240, and uses the result as the secondary feature detection weight data.
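- One simple way to realize this size correction is to resample the reference weight kernel by the size-ratio parameter; the nearest-neighbour resampling below is an illustrative choice, not the patent's stated method:

```python
import numpy as np

def rescale_weights(ref_weights, size_param, ref_size):
    """Correct reference detection weight data for receptive field size
    (step S1406 style): resample the kernel by the size-ratio parameter."""
    s = size_param / ref_size
    h, w = ref_weights.shape
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ys = np.minimum((np.arange(nh) / s).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / s).astype(int), w - 1)
    return ref_weights[np.ix_(ys, xs)]
```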
- next, the calculation unit 1210 performs the detection of the secondary features; this corresponds to the processing of the feature detection layer 3502 (1, 1) shown in FIG. 35 (step S1407).
- the processing itself in step S1407 is the same as the primary characteristic detection processing in step S1403.
- that is, the calculation unit 1210 executes the product-sum operation using the above equation (6) and the nonlinear operation on its result.
- specifically, the calculation unit 1210 uses the secondary feature detection weight data set in step S1406 and the primary feature integration results held in the intermediate result memory 1260 for the product-sum operation, performs the nonlinear operation on the operation result, and stores the operation result (the secondary feature detection result) in the intermediate result memory 1260.
- next, the weight setting unit 1220 sets the secondary feature integration weight data held in the reference weight holding unit 1230 for the calculation unit 1210 (step S1408). The secondary feature integration weight data here is weight data for executing processing such as local averaging of the secondary feature detection results obtained in step S1407 and detection of their maximum values.
- next, the calculation unit 1210 integrates the detection results of each secondary feature; this corresponds to the processing of the feature integration layer 3503 (2, 1) shown in FIG. 35 (step S1409).
- specifically, the calculation unit 1210 executes the product-sum operation of the secondary feature detection results held in the intermediate result memory 1260 and the secondary feature integration weight data set in step S1408, and holds the result in the intermediate result memory 1260. At this time, the calculation unit 1210 may further perform nonlinear processing on the result of the product-sum operation and hold the processing result in the intermediate result memory 1260.
- next, the weight setting unit 1220 sets the tertiary feature detection weight data for the calculation unit 1210 (step S1410).
- the tertiary feature detection weight data is the weight data for detecting each tertiary feature shown in FIG. 14 described above.
- also, the weight setting unit 1220 causes the parameter detection unit 1240 to set, as a parameter, a value based on the size of the secondary features, from the primary feature detection results and the secondary feature detection results held in the intermediate result memory 1260.
- as this parameter, for example, as described in the first embodiment, in the case of the right-open V-shaped feature, the vertical distance between the upward-sloping diagonal feature and the downward-sloping diagonal feature can be used.
- then, the weight setting unit 1220 corrects the reference tertiary feature detection weight data held in the reference weight holding unit 1230 with respect to the receptive field size, using the parameter obtained by the parameter detection unit 1240, and uses the result as the tertiary feature detection weight data.
- next, the calculation unit 1210 performs the tertiary feature detection; this corresponds to the processing of the feature detection layer 3502 (1, 2) shown in FIG. 35 (step S1411). More specifically, the calculation unit 1210 executes the product-sum operation of the tertiary feature detection weight data set in step S1410 and the secondary feature integration results held in the intermediate result memory 1260, and the nonlinear operation on the result, and stores the operation result (the tertiary feature detection result) in the intermediate result memory 1260.
- next, the weight setting unit 1220 sets the tertiary feature integration weight data held in the reference weight holding unit 1230 for the calculation unit 1210 (step S1412).
- the tertiary feature integrated weight data here is weight data for performing processing such as local averaging and maximum value detection of the tertiary feature result detected in step S1411.
- next, the calculation unit 1210 integrates the detection results of each tertiary feature; this corresponds to the processing of the feature integration layer 3503 (2, 2) shown in FIG. 35 (step S1413). Specifically, the calculation unit 1210 executes the product-sum operation of the detection results of each tertiary feature held in the intermediate result memory 1260 and each tertiary feature integration weight data set in step S1412, and holds the result of the product-sum operation in the intermediate result memory 1260. At this time, the calculation unit 1210 may further perform nonlinear processing on the result of the product-sum operation and hold the processing result in the intermediate result memory 1260.
- next, the weight setting unit 1220 sets the quaternary feature detection weight data for the calculation unit 1210 (step S1414).
- the quaternary feature detection weight data here is the weight data for detecting each quaternary feature shown in FIG. 14 described above.
- also, the weight setting unit 1220 causes the parameter detection unit 1240 to set, as a parameter, a value based on the size of the tertiary features, from the secondary feature detection results and the tertiary feature detection results held in the intermediate result memory 1260. As this parameter, for example, as described in the first embodiment, in the case of the eye feature, the horizontal distance between the right-open V-shaped feature and the left-open V-shaped feature can be used.
- then, the weight setting unit 1220 corrects the reference quaternary feature detection weight data held in the reference weight holding unit 1230 with respect to the receptive field size, using the parameter obtained by the parameter detection unit 1240, and uses the result as the quaternary feature detection weight data.
- next, the calculation unit 1210 performs the quaternary feature detection; this corresponds to the processing of the feature detection layer 3502 (1, 3) shown in FIG. 35 (step S1415). Specifically, the calculation unit 1210 executes the product-sum operation of the quaternary feature detection weight data set in step S1414 and the tertiary feature integration results held in the intermediate result memory 1260, and the nonlinear operation on the result, and stores the operation result (the quaternary feature detection result) in the intermediate result memory 1260. Next, the weight setting unit 1220 sets the quaternary feature integration weight data held in the reference weight holding unit 1230 for the calculation unit 1210 (step S1416). The quaternary feature integration weight data is weight data for performing processing such as local averaging of the quaternary feature results detected in step S1415 and detection of their maximum values.
- next, the calculation unit 1210 integrates the detection results of the quaternary feature; this corresponds to the processing of the feature integration layer 3503 (2, 3) shown in FIG. 35 (step S1417). Specifically, the calculation unit 1210 executes the product-sum operation of the quaternary feature detection result held in the intermediate result memory 1260 and the quaternary feature integration weight data set in step S1416, and holds the result of the product-sum operation in the intermediate result memory 1260. At this time, the calculation unit 1210 may further perform nonlinear processing on the result of the product-sum operation and hold the processing result in the intermediate result memory 1260.
- next, the calculation unit 1210 sets the pattern confirmation weight data (step S1418). The quaternary feature has been detected by the processing up to step S1417; however, as described in the first embodiment, if the background of the target image (input image) contains regions similar to the plurality of tertiary features that make up the quaternary feature, and their positional relationships are also similar, the quaternary feature may be erroneously detected. For example, in the case of face detection, if the background of the input image contains regions similar to both eyes and the mouth, and their positional relationships are similar, erroneous detection may occur in the face feature detection.
- therefore, in the present embodiment, reference pattern confirmation weight data for detecting a typical type (size, direction, etc.) of the pattern to be detected is prepared. This weight data is corrected, the corrected pattern confirmation weight data is set, and it is determined whether the pattern to be finally detected exists in the input image using the set pattern confirmation weight data. In the case of face detection, reference face pattern confirmation weight data for detecting a typical face is prepared and corrected, the corrected face pattern confirmation weight data is set, and it is determined whether a face pattern exists in the input image using the set face pattern confirmation weight data.
- specifically, first, the calculation unit 1210 causes the parameter detection unit 1240 to set, as parameters, values based on the tertiary feature detection results at each position of the detected quaternary feature, from the tertiary feature detection results and the quaternary feature detection results held in the intermediate result memory 1260.
- as these parameters, for example, as described in the first embodiment, in the case of the face feature, the positions of the eye features and the mouth feature can be used.
- then, the calculation unit 1210 corrects the reference pattern confirmation weight data held in the reference weight holding unit 1230 with respect to its receptive field size and rotation, using the parameters obtained by the parameter detection unit 1240, and uses the correction results as the pattern confirmation weight data.
- next, the calculation unit 1210 confirms the detection pattern (step S1419). Specifically, the calculation unit 1210 executes the product-sum operation of the pattern confirmation weight data set in step S1418 and the input signal held in the input signal memory 1250, and the nonlinear operation on the result, and stores the operation result in the intermediate result memory 1260. The result held in the intermediate result memory 1260 is the final detection result of the pattern to be detected.
- as described above, in the present embodiment, reference weight data for detecting each feature is prepared, and the detection weight data is set based on the reference weight data using the parameters obtained from the detection results of the preceding stage. As a result, the detection accuracy of each feature is improved, and the detection accuracy of the finally detected pattern is improved.
- further, the calculation unit 1210 performs the product-sum operation of the detection weight data or the integration weight data with the data from the intermediate result memory 1260 or the input signal memory 1250, and the nonlinear conversion of the result; since the weight data used for the product-sum operation is set each time, the same calculation unit 1210 can be used repeatedly. Furthermore, since both the input signal and the intermediate results are retained, the final confirmation processing can be performed easily.
- in the present embodiment, the integration weight data used for the integration processing is not set according to the detection results, but it is also possible to set the receptive field size for the integration weight data in the same way. It is also possible to omit the integration processing for the quaternary feature in steps S1416 and S1417 shown in FIG. 36.
- FIG. 38 shows an information processing apparatus according to the present embodiment. This device has the function of the pattern recognition device described above.
- this information processing device includes a control unit 1670, a calculation unit 1610, a reference weight holding unit 1630, a parameter detection unit 1640, an input signal memory 1650, an input signal memory control unit 1651, an intermediate result memory 1660, and an intermediate result memory control unit 1661.
- the information processing apparatus basically has the same functions as the information processing apparatus of the eighth embodiment (see FIG. 34), but it does not have a function equivalent to the weight setting unit 1220; instead, it is configured to supply the parameters obtained by the parameter detection unit 1640 to the intermediate result memory control unit 1661 and the calculation unit 1610.
- in the eighth embodiment, parameters are obtained from the processing results of the preceding stage, and the weight data for detecting a feature is set from those parameters; in the present embodiment, by contrast, the reference weight data held in the reference weight holding unit 1630 is used directly as the weight data.
- instead, the size of the preceding detection result held in the intermediate result memory 1660, corresponding to the receptive field, is changed using interpolation or the like.
- that is, the information processing apparatus changes the size of the normal receptive field for the input image 1700, generates a resized local image 1710 as a result, and executes the product-sum operation of the resized local image 1710 and the reference weight data held in the reference weight holding unit 1630.
- for example, when detecting a tertiary feature, the secondary feature detection results held in the intermediate result memory 1660 are used, and the local area of the secondary feature detection result image is resized before use.
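- In code, the difference from the eighth embodiment can be sketched as follows: the weights stay fixed, and the local region cut from the preceding result plane is resized (here by nearest-neighbour interpolation) before the product-sum. The names and the anchoring at the top-left corner are assumptions of this sketch:

```python
import numpy as np

def product_sum_with_resized_field(result_plane, ref_weights, pos,
                                   size_param, ref_size):
    """Ninth-embodiment variant (sketch): keep the reference weights fixed,
    cut a receptive field whose size is scaled by the parameter, resize it
    to the weight size (cf. local image 1710), then take the product-sum."""
    s = size_param / ref_size
    h, w = ref_weights.shape
    sh, sw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    y, x = pos  # top-left anchoring is an assumption of this sketch
    patch = result_plane[y:y + sh, x:x + sw]
    if patch.shape != (sh, sw):
        return 0.0  # receptive field falls outside the result plane
    ys = np.minimum(np.arange(h) * sh // h, sh - 1)
    xs = np.minimum(np.arange(w) * sw // w, sw - 1)
    resized = patch[np.ix_(ys, xs)]  # the resized local image
    return float((resized * ref_weights).sum())
```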
- as described above, in the present embodiment, the size of the preceding detection result used when detecting a feature is changed and reset using the parameters obtained from the preceding detection results. Therefore, the detection accuracy of each feature is improved, and the detection accuracy of the finally detected pattern is improved. In addition, the size of the detection result can be changed easily, by changing the area read from the memory and by interpolation processing.
- the present invention may be applied as a part of a system composed of a plurality of devices (for example, a host computer, an interface device, a reader, a printer, and the like), or as a part of an apparatus consisting of a single device (for example, a copying machine or a facsimile machine).
- further, the present invention is not limited to methods and apparatuses for realizing the above-described embodiments, or to methods performed by combining the methods described in the embodiments.
- the scope of the present invention also includes the case where program code of software for realizing the above-described embodiments is supplied to a computer (CPU or MPU) in a system or apparatus, and the computer of the system or apparatus operates the various devices according to the program code to realize the above-described embodiments.
- in this case, the program code of the software itself realizes the functions of the above-described embodiments, and the program code itself and the means for supplying the program code to the computer, specifically a storage medium storing the program code, are included in the scope of the present invention.
- as a storage medium for storing such program code, for example, a floppy (R) disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a magnetic tape, a nonvolatile memory card, a ROM, or the like can be used.
- such program code is also included in the scope of the present invention when the program code running on the computer realizes the above-described embodiments in cooperation with other application software or the like.
- furthermore, the present invention also includes the case where the supplied program code is stored in a memory provided in a function expansion unit connected to the computer, and a CPU or the like provided in the function expansion unit performs part or all of the actual processing, and the above-described embodiments are realized by that processing. According to the above-described embodiments, it is possible to perform identification that is robust against fluctuations of the input pattern, and to perform pattern recognition at a lower processing cost while reducing the possibility of erroneous identification.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2003289116A AU2003289116A1 (en) | 2002-12-16 | 2003-12-16 | Pattern identification method, device thereof, and program thereof |
US10/539,882 US7577297B2 (en) | 2002-12-16 | 2003-12-16 | Pattern identification method, device thereof, and program thereof |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-364369 | 2002-12-16 | ||
JP2002364369A JP4298283B2 (ja) | 2002-12-16 | 2002-12-16 | パターン認識装置、パターン認識方法、及びプログラム |
JP2003416236A JP4266798B2 (ja) | 2003-12-15 | 2003-12-15 | パターン検出装置及びパターン検出方法 |
JP2003-416236 | 2003-12-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004055735A1 true WO2004055735A1 (ja) | 2004-07-01 |
Family
ID=32599267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/016095 WO2004055735A1 (ja) | 2002-12-16 | 2003-12-16 | パターン識別方法、その装置及びそのプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US7577297B2 (ja) |
AU (1) | AU2003289116A1 (ja) |
WO (1) | WO2004055735A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI647660B (zh) * | 2016-12-15 | 2019-01-11 | 歐姆龍股份有限公司 | 條狀區域檢測裝置、條狀區域檢測方法及其程式的記錄媒體 |
CN110751134A (zh) * | 2019-12-23 | 2020-02-04 | 长沙智能驾驶研究院有限公司 | 目标检测方法、存储介质及计算机设备 |
Families Citing this family (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8553949B2 (en) | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US8363951B2 (en) | 2007-03-05 | 2013-01-29 | DigitalOptics Corporation Europe Limited | Face recognition training method and apparatus |
EP2955662B1 (en) | 2003-07-18 | 2018-04-04 | Canon Kabushiki Kaisha | Image processing device, imaging device, image processing method |
JP4665764B2 (ja) * | 2004-01-15 | 2011-04-06 | 日本電気株式会社 | パターン識別システム、パターン識別方法、及びパターン識別プログラム |
US7564994B1 (en) * | 2004-01-22 | 2009-07-21 | Fotonation Vision Limited | Classification system for consumer digital images using automatic workflow and face detection and recognition |
JP4532915B2 (ja) * | 2004-01-29 | 2010-08-25 | キヤノン株式会社 | パターン認識用学習方法、パターン認識用学習装置、画像入力装置、コンピュータプログラム、及びコンピュータ読み取り可能な記録媒体 |
CA2600938A1 (en) * | 2004-03-24 | 2005-10-06 | Andre Hoffmann | Identification, verification, and recognition method and system |
JP2005352900A (ja) * | 2004-06-11 | 2005-12-22 | Canon Inc | 情報処理装置、情報処理方法、パターン認識装置、及びパターン認識方法 |
JP4217664B2 (ja) * | 2004-06-28 | 2009-02-04 | キヤノン株式会社 | 画像処理方法、画像処理装置 |
US8233681B2 (en) * | 2004-09-24 | 2012-07-31 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer program products for hierarchical registration between a blood vessel and tissue surface model for a subject and a blood vessel and tissue surface image for the subject |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
JP2006254229A (ja) * | 2005-03-11 | 2006-09-21 | Fuji Photo Film Co Ltd | 撮像装置、撮像方法及び撮像プログラム |
JP5008269B2 (ja) * | 2005-04-08 | 2012-08-22 | キヤノン株式会社 | 情報処理装置、情報処理方法 |
JP4412552B2 (ja) * | 2005-10-05 | 2010-02-10 | 富士フイルム株式会社 | 画像レイアウト装置および方法並びにプログラム |
JP4910507B2 (ja) * | 2006-06-29 | 2012-04-04 | コニカミノルタホールディングス株式会社 | 顔認証システム及び顔認証方法 |
JP2008021228A (ja) * | 2006-07-14 | 2008-01-31 | Renesas Technology Corp | データ処理装置 |
JP4683228B2 (ja) * | 2006-07-25 | 2011-05-18 | 富士フイルム株式会社 | 画像表示装置、撮影装置、画像表示方法およびプログラム |
EP2050043A2 (en) | 2006-08-02 | 2009-04-22 | Fotonation Vision Limited | Face recognition with combined pca-based datasets |
JP2008059197A (ja) * | 2006-08-30 | 2008-03-13 | Canon Inc | 画像照合装置、画像照合方法、コンピュータプログラム及び記憶媒体 |
US20080201641A1 (en) * | 2007-02-21 | 2008-08-21 | Yiling Xie | Method And The Associated Mechanism For 3-D Simulation Stored-Image Database-Driven Spectacle Frame Fitting Services Over Public Network |
US8331674B2 (en) | 2007-04-06 | 2012-12-11 | International Business Machines Corporation | Rule-based combination of a hierarchy of classifiers for occlusion detection |
US20090022403A1 (en) * | 2007-07-20 | 2009-01-22 | Fujifilm Corporation | Image processing apparatus, image processing method, and computer readable medium |
JP2009086749A (ja) * | 2007-09-27 | 2009-04-23 | Canon Inc | パターン識別手法、識別用パラメータ学習方法、及び装置 |
JP4948379B2 (ja) * | 2007-12-18 | 2012-06-06 | キヤノン株式会社 | パターン識別器生成方法、情報処理装置、プログラム及び記憶媒体 |
JP5055166B2 (ja) * | 2008-02-29 | 2012-10-24 | キヤノン株式会社 | 眼の開閉度判定装置、方法及びプログラム、撮像装置 |
WO2009122760A1 (ja) * | 2008-04-04 | 2009-10-08 | 富士フイルム株式会社 | 画像処理装置、画像処理方法、およびコンピュータ読取可能な媒体 |
US8290240B2 (en) * | 2008-06-11 | 2012-10-16 | Sirona Dental Systems Gmbh | System, apparatus, method, and computer program product for determining spatial characteristics of an object using a camera and a search pattern |
JP4966260B2 (ja) * | 2008-06-25 | 2012-07-04 | キヤノン株式会社 | 画像処理方法および画像処理装置、プログラム並びに、コンピュータ読み取り可能な記憶媒体 |
US8331655B2 (en) * | 2008-06-30 | 2012-12-11 | Canon Kabushiki Kaisha | Learning apparatus for pattern detector, learning method and computer-readable storage medium |
JP5394485B2 (ja) * | 2008-07-03 | 2014-01-22 | エヌイーシー ラボラトリーズ アメリカ インク | 印環細胞検出器及び関連する方法 |
US8560488B2 (en) * | 2008-08-08 | 2013-10-15 | Nec Corporation | Pattern determination devices, methods, and programs |
US8160354B2 (en) * | 2008-12-26 | 2012-04-17 | Five Apes, Inc. | Multi-stage image pattern recognizer |
US8290250B2 (en) * | 2008-12-26 | 2012-10-16 | Five Apes, Inc. | Method and apparatus for creating a pattern recognizer |
US8229209B2 (en) * | 2008-12-26 | 2012-07-24 | Five Apes, Inc. | Neural network based pattern recognizer |
JP5709410B2 (ja) * | 2009-06-16 | 2015-04-30 | Canon Inc | Pattern processing apparatus, method therefor, and program
JP5538967B2 (ja) | 2009-06-18 | 2014-07-02 | Canon Inc | Information processing apparatus, information processing method, and program
JP5336995B2 (ja) * | 2009-10-19 | 2013-11-06 | Canon Inc | Feature point positioning apparatus, image recognition apparatus, processing method therefor, and program
JP5554984B2 (ja) * | 2009-12-24 | 2014-07-23 | Canon Inc | Pattern recognition method and pattern recognition apparatus
JP5588165B2 (ja) * | 2009-12-24 | 2014-09-10 | Canon Inc | Image processing apparatus, image processing method, and program
JP5812599B2 (ja) * | 2010-02-25 | 2015-11-17 | Canon Inc | Information processing method and apparatus therefor
JP2012038106A (ja) | 2010-08-06 | 2012-02-23 | Canon Inc | Information processing apparatus, information processing method, and program
US8768944B2 (en) | 2010-08-18 | 2014-07-01 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
JP5675214B2 (ja) | 2010-08-18 | 2015-02-25 | Canon Inc | Information processing apparatus, information processing method, and program
US8879804B1 (en) * | 2010-12-18 | 2014-11-04 | Alexey Konoplev | System and method for automatic detection and recognition of facial features |
JP5746550B2 (ja) * | 2011-04-25 | 2015-07-08 | Canon Inc | Image processing apparatus and image processing method
JP5848551B2 (ja) | 2011-08-26 | 2016-01-27 | Canon Inc | Learning apparatus, control method of learning apparatus, detection apparatus, control method of detection apparatus, and program
US9111346B2 (en) * | 2011-09-13 | 2015-08-18 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and recording medium |
JP5896661B2 (ja) | 2011-09-14 | 2016-03-30 | Canon Inc | Information processing apparatus, control method of information processing apparatus, and program
JP5886616B2 (ja) | 2011-11-30 | 2016-03-16 | Canon Inc | Object detection apparatus, control method of object detection apparatus, and program
JP5806606B2 (ja) | 2011-12-01 | 2015-11-10 | Canon Inc | Information processing apparatus and information processing method
JP5865043B2 (ja) | 2011-12-06 | 2016-02-17 | Canon Inc | Information processing apparatus and information processing method
JP6026119B2 (ja) * | 2012-03-19 | 2016-11-16 | Toshiba Corp | Biometric information processing apparatus
JP6000602B2 (ja) * | 2012-03-30 | 2016-09-28 | Canon Inc | Object detection method and object detection apparatus
US8843759B2 (en) * | 2012-08-28 | 2014-09-23 | At&T Intellectual Property I, L.P. | Methods, systems, and computer program products for media-based authentication |
US9460069B2 (en) * | 2012-10-19 | 2016-10-04 | International Business Machines Corporation | Generation of test data using text analytics |
US9092697B2 (en) * | 2013-02-07 | 2015-07-28 | Raytheon Company | Image recognition system and method for identifying similarities in different images |
US9141872B2 (en) | 2013-09-11 | 2015-09-22 | Digitalglobe, Inc. | Automated and scalable object and feature extraction from imagery |
JP6304999B2 (ja) * | 2013-10-09 | 2018-04-04 | Aisin Seiki Co Ltd | Face detection apparatus, method, and program
KR20150071038A (ko) * | 2013-12-17 | 2015-06-26 | Samsung Electronics Co Ltd | Method for providing a social network service using an electronic device, and apparatus implementing the same
IL231862A (en) * | 2014-04-01 | 2015-04-30 | Superfish Ltd | Image representation using a neural network |
US9614724B2 (en) | 2014-04-21 | 2017-04-04 | Microsoft Technology Licensing, Llc | Session-based device configuration |
US9639742B2 (en) * | 2014-04-28 | 2017-05-02 | Microsoft Technology Licensing, Llc | Creation of representative content based on facial analysis |
US9773156B2 (en) | 2014-04-29 | 2017-09-26 | Microsoft Technology Licensing, Llc | Grouping and ranking images based on facial recognition data |
US9384334B2 (en) | 2014-05-12 | 2016-07-05 | Microsoft Technology Licensing, Llc | Content discovery in managed wireless distribution networks |
US9430667B2 (en) | 2014-05-12 | 2016-08-30 | Microsoft Technology Licensing, Llc | Managed wireless distribution network |
US9384335B2 (en) | 2014-05-12 | 2016-07-05 | Microsoft Technology Licensing, Llc | Content delivery prioritization in managed wireless distribution networks |
US10111099B2 (en) | 2014-05-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Distributing content in managed wireless distribution networks |
US9874914B2 (en) | 2014-05-19 | 2018-01-23 | Microsoft Technology Licensing, Llc | Power management contracts for accessory devices |
US10037202B2 | 2014-06-03 | 2018-07-31 | Microsoft Technology Licensing, Llc | Techniques to isolate a portion of an online computing service
US9367490B2 (en) | 2014-06-13 | 2016-06-14 | Microsoft Technology Licensing, Llc | Reversible connector for accessory devices |
US9460493B2 (en) | 2014-06-14 | 2016-10-04 | Microsoft Technology Licensing, Llc | Automatic video quality enhancement with temporal smoothing and user override |
US9373179B2 (en) | 2014-06-23 | 2016-06-21 | Microsoft Technology Licensing, Llc | Saliency-preserving distinctive low-footprint photograph aging effect |
EP3065086A1 (en) * | 2015-03-02 | 2016-09-07 | Medizinische Universität Wien | Computerized device and method for processing image data |
US9524450B2 (en) * | 2015-03-04 | 2016-12-20 | Accenture Global Services Limited | Digital image processing using convolutional neural networks |
US10049406B2 (en) | 2015-03-20 | 2018-08-14 | Bank Of America Corporation | System for sharing retirement scores between social groups of customers |
US10687711B2 (en) | 2015-05-05 | 2020-06-23 | Medizinische Universität Wien | Computerized device and method for processing image data |
US10846566B2 (en) | 2016-09-14 | 2020-11-24 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks |
US10360494B2 (en) * | 2016-11-30 | 2019-07-23 | Altumview Systems Inc. | Convolutional neural network (CNN) system based on resolution-limited small-scale CNN modules |
US10657424B2 (en) * | 2016-12-07 | 2020-05-19 | Samsung Electronics Co., Ltd. | Target detection method and apparatus |
KR102085334B1 (ko) * | 2017-01-19 | 2020-03-05 | Seoul National University R&DB Foundation | Method and apparatus for recognizing rotated objects
US11804070B2 (en) * | 2019-05-02 | 2023-10-31 | Samsung Electronics Co., Ltd. | Method and apparatus with liveness detection |
CN113515981 (zh) | 2020-05-22 | 2021-10-19 | Alibaba Group Holding Ltd | Recognition method, apparatus, device, and storage medium
CN114119610B (zh) * | 2022-01-25 | 2022-06-28 | Hefei Zhongke Leinao Intelligent Technology Co Ltd | Defect detection method based on rotated object detection
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2767814B2 (ja) | 1988-06-14 | 1998-06-18 | NEC Corp | Face image detection method and apparatus
DE4028191A1 (de) * | 1990-09-05 | 1992-03-12 | Philips Patentverwaltung | Circuit arrangement for recognizing a human face
CA2107553C (en) * | 1991-04-05 | 2001-07-31 | Nancy Lin | Monoclonal antibodies to stem cell factor receptors |
JPH0711819 (ja) | 1992-01-16 | 1995-01-13 | Renko Ko | Adjustment plate assembly for door lock
JP2973676B2 (ja) | 1992-01-23 | 1999-11-08 | Matsushita Electric Ind Co Ltd | Facial image feature point extraction apparatus
JP2573126B2 (ja) | 1992-06-22 | 1997-01-22 | Masashige Furukawa | Apparatus for coding facial expressions and discriminating emotions
JPH08147469A (ja) | 1994-11-18 | 1996-06-07 | Ricoh Co Ltd | Color image recognition method
JPH0944676A (ja) | 1995-08-01 | 1997-02-14 | Toyota Motor Corp | Face detection apparatus
JP3279913B2 (ja) | 1996-03-18 | 2002-04-30 | Toshiba Corp | Person authentication apparatus, feature point extraction apparatus, and feature point extraction method
JPH1011543A (ja) | 1996-06-27 | 1998-01-16 | Matsushita Electric Ind Co Ltd | Dictionary creation apparatus for pattern recognition and pattern recognition apparatus
JPH1115973A (ja) | 1997-06-23 | 1999-01-22 | Mitsubishi Electric Corp | Image recognition apparatus
JPH11283036A (ja) | 1998-03-30 | 1999-10-15 | Toshiba Tec Corp | Object detection apparatus and object detection method
KR100343223B1 (ko) | 1999-12-07 | 2002-07-10 | Yun Jong-yong | Speaker position detection apparatus and method
US7054850B2 (en) | 2000-06-16 | 2006-05-30 | Canon Kabushiki Kaisha | Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements |
JP2002358523A (ja) | 2001-05-31 | 2002-12-13 | Canon Inc | Pattern recognition processing apparatus and method, and image input apparatus
EP2955662B1 (en) | 2003-07-18 | 2018-04-04 | Canon Kabushiki Kaisha | Image processing device, imaging device, image processing method |
US8209172B2 (en) | 2003-12-16 | 2012-06-26 | Canon Kabushiki Kaisha | Pattern identification method, apparatus, and program |
JP5008269B2 (ja) | 2005-04-08 | 2012-08-22 | Canon Inc | Information processing apparatus and information processing method
2003
- 2003-12-16 AU AU2003289116A patent/AU2003289116A1/en not_active Abandoned
- 2003-12-16 WO PCT/JP2003/016095 patent/WO2004055735A1/ja active Application Filing
- 2003-12-16 US US10/539,882 patent/US7577297B2/en not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07220090A (ja) * | 1994-02-02 | 1995-08-18 | Canon Inc | Object recognition method
EP0784285A2 (en) * | 1996-01-12 | 1997-07-16 | Canon Kabushiki Kaisha | Method and apparatus for generating a classification tree |
JPH11250267A (ja) * | 1998-03-05 | 1999-09-17 | Nippon Telegr & Teleph Corp <Ntt> | Eye position detection method, eye position detection apparatus, and recording medium storing an eye position detection program
JP2001202516A (ja) * | 2000-01-19 | 2001-07-27 | Victor Co Of Japan Ltd | Personal identification apparatus
EP1262908A1 (en) * | 2001-05-31 | 2002-12-04 | Canon Kabushiki Kaisha | Pattern recognition apparatus for detecting predetermined pattern contained in input signal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI647660B (zh) * | 2016-12-15 | 2019-01-11 | Omron Corp | Strip-shaped region detection apparatus, strip-shaped region detection method, and recording medium storing the program
CN110751134A (zh) * | 2019-12-23 | 2020-02-04 | Changsha Intelligent Driving Institute Co Ltd | Object detection method, storage medium, and computer device
Also Published As
Publication number | Publication date |
---|---|
US20060204053A1 (en) | 2006-09-14 |
AU2003289116A1 (en) | 2004-07-09 |
US7577297B2 (en) | 2009-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004055735A1 (ja) | Pattern identification method, apparatus, and program | |
EP1650711B1 (en) | Image processing device, imaging device, image processing method | |
JP4868530B2 (ja) | Image recognition apparatus | |
CN111274916B (zh) | Face recognition method and face recognition apparatus | |
EP2678824B1 (en) | Determining model parameters based on transforming a model of an object | |
JP4532915B2 (ja) | Learning method for pattern recognition, learning apparatus for pattern recognition, image input apparatus, computer program, and computer-readable recording medium | |
US8254644B2 (en) | Method, apparatus, and program for detecting facial characteristic points | |
CN112686812B (zh) | Bank card tilt correction detection method and apparatus, readable storage medium, and terminal | |
CN109766873B (zh) | Pedestrian re-identification method using hybrid deformable convolution | |
CN108416291B (zh) | Face detection and recognition method, apparatus, and system | |
JP5574033B2 (ja) | Image recognition system, recognition method therefor, and program | |
CN111784747A (zh) | Vehicle multi-object tracking system and method based on keypoint detection and correction | |
CN111626295B (zh) | Training method and apparatus for a license plate detection model | |
CN111507908B (zh) | Image rectification processing method and apparatus, storage medium, and computer device | |
JP4993615B2 (ja) | Image recognition method and apparatus | |
CN111192194A (zh) | Panoramic image stitching method for curtain-wall building facades | |
CN111639580A (zh) | Gait recognition method combining a feature separation model and a viewpoint conversion model | |
CN112686248B (zh) | Certificate addition/removal category detection method and apparatus, readable storage medium, and terminal | |
Cai et al. | Feature detection and matching with linear adjustment and adaptive thresholding | |
JP4298283B2 (ja) | Pattern recognition apparatus, pattern recognition method, and program | |
CN114927236A (zh) | Detection method and system for multi-object images | |
CN114332814A (zh) | Parking space recognition method, apparatus, electronic device, and storage medium | |
CN114241194A (zh) | Meter recognition and reading method based on a lightweight network | |
JP4493448B2 (ja) | Object identification apparatus, method, and program | |
JP4266798B2 (ja) | Pattern detection apparatus and pattern detection method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW
AL | Designated countries for regional patents |
Kind code of ref document: A1
Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
121 | Ep: The EPO has been informed by WIPO that EP was designated in this application | |
122 | Ep: PCT application non-entry in European phase | |
WWE | Wipo information: entry into national phase |
Ref document number: 10539882
Country of ref document: US
WWP | Wipo information: published in national office |
Ref document number: 10539882
Country of ref document: US