WO2013073622A1 - Local feature quantity extraction device, local feature quantity extraction method, and program - Google Patents
- Publication number
- WO2013073622A1 (PCT/JP2012/079673)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- local
- region
- sub
- feature point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
Definitions
- the present invention relates to a local feature quantity extraction device, a local feature quantity extraction method, and a program.
- Patent Literature 1 and Non-Patent Literature 1 disclose local feature quantity extraction devices using SIFT (Scale Invariant Feature Transform) feature quantities.
- FIG. 23 is a diagram illustrating an example of a general configuration of a local feature quantity extraction device using a SIFT feature quantity.
- FIG. 24 is a diagram showing an image of SIFT feature quantity extraction in the local feature quantity extraction device shown in FIG. 23.
- the local feature quantity extraction apparatus includes a feature point detection unit 200, a local region acquisition unit 210, a sub-region division unit 220, and a sub-region feature vector generation unit 230.
- the feature point detection unit 200 detects a large number of characteristic points (feature points) from the image, and outputs the coordinate position, scale (size), and angle of each feature point.
- the local region acquisition unit 210 acquires a local region where feature amount extraction is performed from the detected coordinate value, scale, and angle of each feature point.
- the sub-region dividing unit 220 divides the local region into sub-regions. In the example shown in FIG. 24, the sub-region dividing unit 220 divides the local region into 16 blocks (4×4 blocks).
- the sub-region feature vector generation unit 230 generates a gradient direction histogram for each sub-region of the local region. Specifically, the sub-region feature vector generation unit 230 calculates a gradient direction for each pixel in each sub-region and quantizes it into eight directions. The quantized directions are then aggregated into a histogram for each sub-region, so that the feature quantity has 16 blocks × 8 directions = 128 dimensions.
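The background processing described above can be sketched in Python. This is an illustrative reading of the SIFT-style histogram generation, not code from the patent; the pure-Python patch representation and the function name are assumptions:

```python
import math

def gradient_histogram(patch, n_bins=8):
    """Quantize per-pixel gradient directions of a grayscale patch
    (a 2-D list of intensities) into n_bins directions and aggregate
    them into a gradient direction histogram."""
    h, w = len(patch), len(patch[0])
    hist = [0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            dy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            theta = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(theta * n_bins / (2 * math.pi)) % n_bins] += 1
    return hist
```

Applying this to each of the 4×4 sub-regions and concatenating the histograms yields the 128-dimensional feature quantity mentioned below.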
- Patent Document 2 discloses a technique for narrowing down the feature points for which local feature quantities are calculated, by extracting feature points that have high reproducibility even when the image is rotated, enlarged, or reduced, in order to improve search accuracy and recognition accuracy using local feature quantities.
- the local feature quantity described above has the problem that its size becomes large. For example, in the case of the SIFT feature quantity, if the histogram value of each dimension is represented by 1 byte, a size of 128 dimensions × 1 byte is required per feature point. In the methods disclosed in Patent Document 1 and Non-Patent Document 1, local feature quantities are generated for all feature points extracted from the input image, so the size of the generated local feature quantities grows as the number of detected feature points increases.
- when the size of the local feature quantity becomes large in this way, problems can arise in using it for image matching. For example, when a user terminal (such as a mobile terminal with a camera) extracts a local feature quantity from an image and transmits it to a server to search for similar images, a large local feature quantity lengthens the communication time, so it takes a long time to obtain the search result. A large local feature quantity also lengthens the processing time for collation. Furthermore, in an image search using local feature quantities, the local feature quantities are held in memory, and a large feature quantity size reduces the number of images whose local feature quantities can be held there. Such a scheme is therefore unsuitable for large-scale search over a large number of images.
- in the technique of Patent Document 2, although the feature points for which local feature quantities are calculated can be narrowed down to those with high reproducibility, if the number of feature points with high reproducibility is large, the same problem arises as in the techniques disclosed in Patent Document 1 and Non-Patent Document 1.
- an object of the present invention is to reduce the size of the feature amount while maintaining the accuracy of subject identification.
- a local feature quantity extraction device according to the present invention includes: a feature point detection unit that detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point; a feature point selection unit that selects a predetermined number of feature points from the detected feature points in order of importance based on the feature point information; a local region acquisition unit that acquires a local region for each selected feature point; a sub-region dividing unit that divides each local region into a plurality of sub-regions; a sub-region feature vector generation unit that generates a multi-dimensional feature vector for each sub-region in each local region; and a dimension selection unit that, based on the positional relationship of the sub-regions in each local region, selects dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions is low, and outputs the elements of the selected dimensions as the feature quantity of the local region.
- in a local feature quantity extraction method according to the present invention, a computer detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point; selects a predetermined number of feature points from the detected feature points in order of importance based on the feature point information; acquires a local region for each selected feature point; divides each local region into a plurality of sub-regions; generates a multi-dimensional feature vector for each sub-region in each local region; and, based on the positional relationship of the sub-regions in each local region, selects dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions is low, and outputs the elements of the selected dimensions as the feature quantity of the local region.
- a program according to the present invention causes a computer to realize: a function of detecting a plurality of feature points in an image and outputting feature point information, which is information about each feature point; a function of selecting a predetermined number of feature points from the detected feature points in order of importance based on the feature point information; a function of acquiring a local region for each selected feature point; a function of dividing each local region into a plurality of sub-regions; a function of generating a multi-dimensional feature vector for each sub-region in each local region; and a function of, based on the positional relationship of the sub-regions in each local region, selecting dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions is low, and outputting the elements of the selected dimensions as the feature quantity of the local region.
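The steps recited above can be sketched as a single pipeline. The following Python is an illustrative assumption, not the patent's own implementation: the function name, the dict-based feature point representation, the use of scale as the importance measure, and the alternating keep-pattern for dimension selection are all inventions for this sketch, and the sub-region feature vectors are faked as zero histograms to show only the data flow.

```python
def extract_local_features(points, n_select, n_sub=16, n_dir=8, keep=4):
    """End-to-end sketch of the claimed pipeline."""
    # (1) Select a predetermined number of points in order of importance
    #     (scale is used as the importance measure here).
    selected = sorted(points, key=lambda p: p["scale"], reverse=True)[:n_select]
    features = []
    for _point in selected:
        # (2)-(4) Local region -> sub-regions -> per-sub-region vectors
        #         (faked as zero histograms in this sketch).
        vectors = [[0.0] * n_dir for _ in range(n_sub)]
        # (5) Dimension selection: adjacent sub-regions keep different
        #     gradient directions so their correlation stays low.
        feature = []
        for s, vec in enumerate(vectors):
            offset = (s % 2) * keep  # alternate kept directions per sub-region
            feature.extend(vec[(offset + d) % n_dir] for d in range(keep))
        features.append(feature)
    return features
```

With 16 sub-regions and 4 of 8 directions kept per sub-region, each local feature quantity in this sketch has 64 elements.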
- the term “unit” here does not simply mean a physical means; it also covers the case where the function of a “unit” is realized by software. The function of one “unit” or device may be realized by two or more physical means or devices, and the functions of two or more “units” or devices may be realized by one physical means or device.
- FIG. 1 is a diagram showing the configuration of a local feature quantity extraction device according to the first embodiment of the present invention. The subsequent figures show configuration examples of the feature point selection unit, an example of selecting dimensions from a 128-dimensional feature vector, another example of dimension selection, an example of the priority used when selecting dimensions, and an example of the dimension priority of a gradient direction histogram.
- FIG. 1 is a diagram showing a configuration of a local feature quantity extraction apparatus according to the first embodiment of the present invention.
- the local feature quantity extraction device 1A includes a feature point detection unit 10, a feature point selection unit 12, and a local feature quantity generation unit 14.
- the local feature quantity extraction device 1A can be configured using an information processing device such as a personal computer or a portable information terminal, for example.
- each part constituting the local feature quantity extraction device 1A can be realized, for example, by using a storage area such as a memory, or by a processor running a program stored in the storage area. The components in the other embodiments described later can be realized in the same way.
- the feature point detection unit 10 detects a large number of characteristic points (feature points) from the image, and outputs feature point information that is information about each feature point.
- the feature point information indicates, for example, the coordinate position and scale of each detected feature point, the orientation of the feature point, and a “feature point number”, which is a unique ID (identification) assigned to the feature point.
- the feature point detection unit 10 may output the feature point information as separate feature point information for each orientation direction of each feature point. For example, it may output feature point information only for the main orientation direction at each feature point, or also for the second and subsequent orientation directions. When feature point information is also output for the second and subsequent orientation directions, the feature point detection unit 10 can assign a different feature point number to each orientation direction at each feature point.
- the feature point detection target image may be either a still image or a moving image (including a short clip).
- an image captured by an imaging device such as a digital camera, a digital video camera, or a mobile phone may be used.
- the image may be a compressed image such as JPEG (Joint Photographic Experts Group), or may be an uncompressed image such as TIFF (Tagged Image File Format).
- the image may be a compressed movie or a decoded movie.
- the feature point detection unit 10 can detect feature points for every frame image constituting a moving image. If the image is a compressed video, any compression format such as MPEG (Moving Picture Experts Group), Motion JPEG, or Windows Media Video (Windows and Windows Media are registered trademarks) may be used.
- the feature point detection unit 10 can use, for example, DoG (Difference-of-Gaussian) processing when detecting feature points from an image and extracting feature point information. Specifically, the feature point detection unit 10 can determine the position and scale of the feature point by searching for extreme values in the scale space using DoG processing. Furthermore, the feature point detection unit 10 can calculate the orientation of each feature point using the determined position and scale of the feature point and the gradient information of the surrounding area. Note that the feature point detection unit 10 may use other methods such as Fast-Hessian Detector instead of DoG when detecting feature points from an image and extracting feature point information.
- the feature point selection unit 12 selects a specified number (predetermined number) of feature points in order of importance from the detected feature points based on the feature point information output from the feature point detection unit 10. Then, the feature point selection unit 12 outputs the feature point number of the selected feature point, information indicating the order of importance, and the like as the feature point selection result.
- the feature point selection unit 12 can hold, for example, designated number information indicating the “designated number” of feature points to be selected in advance.
- the specified number information may be defined in the program, for example, or may be stored in a table or the like referred to by the program.
- the specified number information may be information indicating the specified number itself or information indicating the total size (for example, the number of bytes) of local feature amounts in the image.
- in the latter case, the feature point selection unit 12 can calculate the specified number, for example, by dividing the total size by the size of the local feature quantity at one feature point.
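As a minimal sketch of this calculation (the function name is illustrative, and the default of 128 bytes per local feature quantity, i.e. 128 dimensions × 1 byte as in the SIFT example above, is an assumption):

```python
def designated_number(total_bytes, bytes_per_feature=128):
    """Derive the specified number of feature points from a total size
    budget for the local feature quantities of one image, by dividing
    the budget by the size of one local feature quantity."""
    return total_bytes // bytes_per_feature
```

For example, a 1024-byte budget with 128-byte features yields 8 feature points.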
- the feature point selection unit 12 may include a high-scale order feature point selection unit 20.
- the high scale order feature point selection unit 20 can select a specified number of feature points in descending order of scale based on the feature point information output from the feature point detection unit 10.
- the high-scale order feature point selection unit 20 rearranges the feature points in the order of the scales of all feature points based on the feature point information, and assigns importance in order from the feature points with the largest scale.
- the high-scale order feature point selection unit 20 selects feature points in descending order of importance, and outputs information about the selected feature points as a selection result when the feature points are selected by the designated number.
- the high-scale order feature point selection unit 20 can output, for example, a feature point number uniquely assigned to each feature point as a selection result.
- the high-scale order feature point selection unit 20 selects feature points in descending order of scale, and as a result, it is possible to select feature points in a wide range of scales.
- by selecting feature points over a wide range of scales in this way, it is possible to cope with a wide range of variations in the size of an object appearing in an image.
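The high-scale-order selection can be sketched as follows; the pair-based feature point representation and the function name are assumptions for this sketch:

```python
def select_by_scale(points, n_select):
    """Sort feature points by scale, largest first, and keep the
    specified number; each point is a (feature_point_number, scale)
    pair. Returns the feature point numbers in importance order."""
    ranked = sorted(points, key=lambda p: p[1], reverse=True)
    return [number for number, _scale in ranked[:n_select]]
```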
- the feature point selection unit 12 may include a feature point classification unit 22 and a representative feature point selection unit 24.
- the feature point classifying unit 22 can classify the plurality of detected feature points into a plurality of groups based on the feature point information.
- the representative feature point selection unit 24 can select a specified number of feature points by selecting at least one feature point from each group.
- the feature point classifying unit 22 calculates the density of feature points in the spatial direction using, for example, the information on the coordinate positions of the feature points included in the feature point information. It then groups feature points with close coordinate positions, assigns a unique identifier to each group, and can output information indicating which group each feature point belongs to as spatial-direction feature point density information.
- the representative feature point selection unit 24 can output information on the selected feature points as a selection result by selecting a specified number of feature points based on the spatial direction feature point density information. For example, when the representative feature point selection unit 24 receives information indicating which group each feature point belongs to as spatial direction feature point density information, the representative feature point selection unit 24 selects the feature point having the largest scale in each group. Alternatively, a feature point that is most isolated in each group (for example, a feature point having the maximum sum of distances from all feature points existing in the group) may be selected.
- the representative feature point selection unit 24 may determine that the feature points selected from the group having a small number of feature points have high importance, and the feature points selected from the group having a large number of feature points have low importance.
- the representative feature point selection unit 24 reduces the feature points to the specified number based on, for example, the importance, and can output information on the selected feature points as the selection result. At this time, the representative feature point selection unit 24 may select feature points in descending order of importance.
- alternatively, the representative feature point selection unit 24 may select feature points one by one, starting, for example, from the group with the smallest number of feature points.
- feature points detected from an image may be concentrated in a specific region in the image, and information of those feature points may include redundancy.
- the representative feature point selection unit 24 can select feature points evenly from the image by considering the density of feature points in the spatial direction. Accordingly, it is possible to reduce the number of feature points to be described in the feature amount description with almost no deterioration in accuracy in applications such as image search and object detection.
- the feature point classification method is not limited to the method based on the density of feature points in the spatial direction.
- the feature point classification unit 22 may further classify the feature points based on the orientation similarity of the feature points in the group in addition to the density of feature points in the spatial direction.
- for example, among the feature points in a group classified by spatial-direction density, the feature point classifying unit 22 may compare the orientations of the mutually closest feature points, keeping them in the same group if the orientations are similar and splitting them into different groups if they are not.
- the feature point classifying unit 22 classifies the feature points based on the density of feature points in the spatial direction, and then classifies the feature points based on the orientation of the feature points.
- the feature points may be classified in consideration of the density and orientation similarity at the same time.
- the feature point selection unit 12 may include a feature point random selection unit 26.
- the feature point random selection unit 26 can randomly assign importance to all feature points and select them in descending order of importance. When the designated number of feature points have been selected, the feature point random selection unit 26 can output information about the selected feature points as the selection result.
- the feature point random selection unit 26 assigns importance to the feature points randomly and selects them in that order, so that feature points can be selected while ideally preserving the shape of the original distribution of feature points.
- as a result, feature points with a wide range of scales are selected, and it is possible to cope with a wide range of variations in the size of an object appearing in an image.
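A minimal sketch of the random-importance selection (the fixed seed exists only to make this example reproducible; the names are assumptions):

```python
import random

def select_randomly(points, n_select, seed=0):
    """Assign random importance to all feature points (one random key
    per point) and keep the specified number in that order."""
    rng = random.Random(seed)
    ranked = sorted(points, key=lambda _p: rng.random())
    return ranked[:n_select]
```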
- the feature point selection unit 12 may include a specific scale region feature point selection unit 28.
- the specific scale region feature point selection unit 28 can select only feature points included in a specific scale region from the scales of all feature points based on the feature point information.
- the specific scale region feature point selection unit 28 reduces the feature points to the specified number based on, for example, the importance, and can output information on the selected feature points as the selection result. At this time, the specific scale region feature point selection unit 28 may select feature points in descending order of importance.
- the specific scale region feature point selection unit 28 may determine that a feature point whose scale is closer to the center of the scale region to be selected has higher importance, and select feature points in order of importance.
- the specific scale region feature point selection unit 28 may determine that a feature point having a larger scale has a higher importance in the scale region to be selected, and may select feature points in the order of importance.
- the specific scale region feature point selection unit 28 may also, for example, treat feature points whose scales are closer to the selected scale region as more important, and additionally select feature points in order from the scale regions immediately before and after this scale region.
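The scale-region selection with centre-distance importance might look like the following sketch (the function name and dict representation are assumptions):

```python
def select_in_scale_region(points, lo, hi, n_select):
    """Keep only feature points whose scale lies in [lo, hi], treating
    scales nearer the centre of the region as more important, one of
    the orderings described above. Returns at most n_select points."""
    centre = (lo + hi) / 2.0
    inside = [p for p in points if lo <= p["scale"] <= hi]
    inside.sort(key=lambda p: abs(p["scale"] - centre))
    return inside[:n_select]
```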
- in some cases, the size of the object in an image registered on the database side is known, and the size ratio of the object between the query-side image and the database-side image can be estimated.
- in such a case, it is effective to select feature points intensively from the scale region obtained by correcting the scales of those feature points according to the object size ratio between the query-side image and the database-side image.
- any technique may be used to determine which feature points on the database side have high importance. For example, a gaze region in a database-side image may be specified in advance using a saliency map or the like, and feature points detected from that region may be defined as having high importance; alternatively, feature points with a large scale may be defined as important.
- the local feature quantity generation unit 14 receives the feature point information output from the feature point detection unit 10 and the selection result output from the feature point selection unit 12, and generates (describes) a local feature quantity, that is, the feature quantity of the local region, for each selected feature point.
- the local feature quantity generation unit 14 can generate and output the local feature quantities in order of importance of the feature points.
- for example, when the selection result contains the feature point numbers of the selected feature points, the local feature quantity generation unit 14 can take the feature points corresponding to those feature point numbers as the targets of feature quantity generation.
- the local feature quantity generation unit 14 can be configured to include a local region acquisition unit 40, a sub region division unit 42, a sub region feature vector generation unit 44, and a dimension selection unit 46.
- the local region acquisition unit 40 acquires the local region from which the feature quantity is extracted, from the coordinate value, scale, and orientation of each detected feature point based on the feature point information. When a plurality of pieces of feature point information with different orientations exist for one feature point, the local region acquisition unit 40 can acquire a local region for each piece of feature point information.
- the sub-region dividing unit 42 normalizes the local region by rotating it according to the orientation direction of the feature points, and then divides it into sub-regions.
- the sub-region dividing unit 42 can divide the local region into 16 blocks (4×4 blocks) as shown in FIG. 24.
- the sub-region dividing unit 42 can also divide the local region into 25 blocks (5×5 blocks).
- the sub-region feature vector generation unit 44 generates a feature vector for each sub-region of the local region.
- a gradient direction histogram can be used as the feature vector of the sub-region.
- the sub-region feature vector generation unit 44 calculates a gradient direction for each pixel in each sub-region and quantizes it in eight directions.
- the gradient direction obtained here is a relative direction with respect to the angle of each feature point output by the feature point detection unit 10. That is, the direction is normalized with respect to the angle output by the feature point detection unit 10.
- the sub-region feature vector generation unit 44 aggregates the frequencies in the eight directions quantized for each sub-region, and generates a gradient direction histogram.
- the gradient direction is not limited to being quantized to 8 directions, but may be quantized to an arbitrary quantization number such as 4 directions, 6 directions, and 10 directions.
- floor() is a function that truncates the fractional part, round() is a function that rounds to the nearest integer, and mod is an operation that obtains the remainder; these can be used to map a gradient direction onto a quantized direction index.
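Using these operations, the quantization of a gradient direction can be sketched as follows (the function name and the choice between floor and round are illustrative assumptions):

```python
import math

def quantize_direction(theta, n_dir=8, use_round=False):
    """Map a gradient direction theta (radians) to one of n_dir bins:
    floor() truncates, round() snaps to the nearest bin, and mod wraps
    the index back into [0, n_dir)."""
    q = theta * n_dir / (2 * math.pi)
    index = round(q) if use_round else math.floor(q)
    return index % n_dir
```

With round(), directions just below 2π wrap back to bin 0, which is why the mod is needed.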
- instead of aggregating simple frequencies, the gradient magnitudes may be added up when aggregating.
- the weight value may be added not only to the sub-region to which the pixel belongs but also to nearby sub-regions (such as adjacent blocks), according to the distance between the sub-regions.
- the weight value may be added to the gradient directions before and after the quantized gradient direction.
- the feature vector of the sub-region is not limited to the gradient direction histogram, and may be any one having a plurality of dimensions (elements) such as color information.
- in the following description, a gradient direction histogram is used as the feature vector of the sub-region.
- the sub-region feature vector generation unit 44 can output the coordinate position information of the feature points in the local feature amount. Further, the sub-region feature vector generation unit 44 may output the scale information of the selected feature point by including it in the local feature amount.
- the dimension selection unit 46 selects (decimates) a dimension (element) to be output as a local feature amount based on the positional relationship between the sub-regions so that the correlation between feature vectors of adjacent sub-regions becomes low. More specifically, the dimension selection unit 46 selects a dimension so that at least one gradient direction differs between adjacent sub-regions, for example.
- adjacent sub-regions here are mainly sub-regions that directly adjoin each other, but they are not limited to this; for example, a sub-region within a predetermined distance from the target sub-region may also be treated as a nearby sub-region.
- the dimension selection unit 46 can not only select dimensions but also determine a selection priority. For example, it can select dimensions with priorities so that dimensions of the same gradient direction are not selected in adjacent sub-regions. The dimension selection unit 46 then outputs a feature vector composed of the selected dimensions as the local feature quantity, and can output the local feature quantity with the dimensions rearranged according to the priority.
- FIG. 6 is a diagram illustrating an example of selecting dimensions from the feature vector of a 128-dimensional gradient direction histogram generated by dividing a local region into 4×4 block sub-regions and quantizing gradient directions into eight directions.
- when the dimension selection unit 46 selects 64 dimensions, half of the 128 dimensions, it can select the dimensions so that dimensions of the same gradient direction are not selected in adjacent left-right and upper-lower blocks (sub-regions); FIG. 6 shows such an example.
- when the gradient directions (dimensions) selected in adjacent blocks are combined, all eight directions are covered. In other words, the feature quantities complement each other between adjacent blocks.
- further, the dimension selection unit 46 can select the dimensions so that dimensions of the same gradient direction are not selected even between blocks located at 45° to each other (diagonally adjacent blocks).
- when the gradient directions (dimensions) selected in adjacent 2×2 blocks are combined, all eight directions are covered. That is, in this case as well, the feature quantities complement each other between adjacent blocks.
- the gradient directions are selected uniformly so that the gradient directions do not overlap between adjacent blocks.
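One concrete way to realize such a selection for the 128-to-64-dimension case is a checkerboard pattern over the 4×4 blocks. The pattern below is an illustrative assumption, not FIG. 6 itself, and it only guarantees that left-right and upper-lower neighbours share no gradient direction (diagonal blocks do share directions in this simplified version):

```python
def select_64_of_128(n_side=4, n_dir=8):
    """Checkerboard dimension selection over 4x4 blocks x 8 directions:
    even-parity blocks keep directions 0-3, odd-parity blocks keep
    directions 4-7, so horizontally and vertically adjacent blocks
    share no gradient direction and together cover all 8 directions.
    Returns the 64 selected dimension indices out of 128."""
    selected = []
    for by in range(n_side):
        for bx in range(n_side):
            offset = ((bx + by) % 2) * (n_dir // 2)  # 0 or 4
            block = by * n_side + bx
            selected.extend(block * n_dir + offset + d
                            for d in range(n_dir // 2))
    return selected
```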
- FIG. 7 is a diagram showing another example of dimension selection.
- the dimension selection unit 46 can select dimensions so that dimensions of the same gradient direction are not selected in adjacent left-right and upper-lower blocks when selecting half of the dimensions, from 150 dimensions down to 75 dimensions. In this case, the gradient directions selected in adjacent blocks do not match.
- alternatively, the dimension selection unit 46 can select the dimensions so that only one selected gradient direction is the same (and the remaining one differs) between blocks located at 45° to each other.
- Further, the dimension selection unit 46 can select the dimensions so that the selected gradient directions do not match at all between blocks located at 45° from each other (see FIG. 7).
- For example, the dimension selection unit 46 selects one gradient direction from each sub-region for the 1st through 25th dimensions, two gradient directions for the 26th through 50th dimensions, and three gradient directions for the 51st through 75th dimensions.
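The progressive order just described (one direction per sub-region for dimensions 1-25, a second for 26-50, a third for 51-75) could be generated along the following lines. The function name and the rule for staggering directions across sub-regions are assumptions for illustration:

```python
def priority_order(num_subregions=25, num_directions=6, rounds=3):
    """Build a progressive selection order of (sub_region, direction) pairs.

    Each round takes one more gradient direction from every sub-region, so
    any prefix of 25*r elements covers all sub-regions evenly. Which
    direction is taken in each round is an illustrative choice.
    """
    order = []
    for r in range(rounds):
        for s in range(num_subregions):
            # stagger the chosen direction across sub-regions and rounds
            order.append((s, (s + r * 2) % num_directions))
    return order
```

Any 25-element prefix of the resulting order touches every sub-region exactly once, which is the property the hierarchical output relies on.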
- FIG. 9 is a diagram illustrating an example of element numbers of 150-dimensional feature vectors.
- FIG. 10 is a diagram showing a configuration example of local feature amounts obtained by selecting the elements shown in FIG. 9 according to the priority order shown in FIG. 8.
- The dimension selection unit 46 can output the dimensions (elements) in the order shown in FIG. 10, for example. Specifically, when outputting a 150-dimensional local feature amount, the dimension selection unit 46 can output all 150 elements in the order shown in FIG. 10. When outputting a 25-dimensional local feature amount, it can output the elements in the first row of FIG. 10 (the 76th, 45th, 83rd, ..., 120th elements) in the order shown there (left to right). When outputting a 50-dimensional local feature amount, it can output, in addition to the first row, the elements in the second row of FIG. 10 in the order shown (left to right).
- In this way, the local feature amount has a hierarchical structure. For example, between the 25-dimensional local feature amount and the 150-dimensional local feature amount, the arrangement of the first 25 elements is the same.
- By selecting dimensions hierarchically (progressively) in this way, the dimension selection unit 46 can extract and output local feature amounts of an appropriate size according to the application, communication capacity, terminal specifications, and so on.
- Since the dimension selection unit 46 selects dimensions hierarchically and outputs them rearranged in priority order, images can be matched using local feature amounts of different numbers of dimensions. For example, when images are collated using a 75-dimensional local feature amount and a 50-dimensional local feature amount, the distance between the local feature amounts can be calculated using only the first 50 dimensions.
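Because the dimensions are emitted in priority order, the first k elements of a longer descriptor form a valid k-dimensional descriptor of the same point, so a distance over the shared prefix is meaningful. A minimal sketch of such a prefix distance (the function name is an assumption):

```python
def progressive_distance(fv_a, fv_b):
    """Euclidean distance between two local feature amounts of possibly
    different sizes, computed over the shared dimension prefix."""
    k = min(len(fv_a), len(fv_b))
    return sum((a - b) ** 2 for a, b in zip(fv_a[:k], fv_b[:k])) ** 0.5
```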
- the priorities shown in FIGS. 8 to 10 are merely examples, and the order of selecting dimensions is not limited to this.
- For example, the order shown in FIG. 11A and FIG. 11B may be used.
- the priority order may be determined so that dimensions are selected from all the sub-regions.
- Since the vicinity of the center of the local region may be important, the priority order may be determined so that sub-regions near the center are selected more frequently.
- the information indicating the dimension selection order may be defined in the program, for example, or may be stored in a table or the like (selection order storage unit) referred to when the program is executed.
- The dimension selection unit 46 may also make selections as shown in FIG. 12(a) and FIG. 12(b). In this case, six dimensions are selected in one sub-region while zero dimensions are selected in other sub-regions close to it. Even in such a case, it can be said that a dimension is selected for each sub-region so that the correlation between adjacent sub-regions becomes low.
- The shapes of the local region and the sub-regions are not limited to the squares shown in FIG. 6 and FIG. 7, and can be arbitrary.
- the local region acquisition unit 40 may acquire a circular local region.
- the sub-region dividing unit 42 can divide the circular local region into, for example, 9 or 17 sub-regions.
- The dimension selection unit 46 can select dimensions in each sub-region as shown, for example, in FIG. 14. In the example of FIG. 14, when selecting 40 dimensions from 72, no dimension thinning is performed in the central sub-region.
- FIG. 15 is a flowchart illustrating an example of processing in the local feature quantity extraction device 1A.
- the feature point detection unit 10 receives an image that is a generation target of a local feature amount (S1501).
- the feature point detection unit 10 detects feature points from the received image, and outputs feature point information including the coordinate position of the feature points, the scale, the orientation of the feature points, the feature point number, and the like (S1502).
- the feature point selection unit 12 selects a specified number of feature points in the order of importance from the detected feature points based on the feature point information, and outputs the selection result (S1503).
- the local region acquisition unit 40 acquires a local region for performing feature extraction based on the coordinate value, scale, and angle of each selected feature point (S1504). Then, the sub-region dividing unit 42 divides the local region into sub-regions (S1505). The sub-region feature vector generation unit 44 generates a gradient direction histogram for each sub-region of the local region (S1506). Finally, the dimension selection unit 46 selects a dimension (element) to be output as a local feature amount in accordance with the determined selection order (S1507).
- As described above, in the first embodiment, the dimension selection unit 46 selects the dimensions (elements) to output as the local feature amount so that the correlation between adjacent sub-regions becomes low, based on the positional relationship of the sub-regions. That is, since adjacent sub-regions often have a high correlation, the dimension selection unit 46 can select the dimensions so that the same feature vector dimension (element) is not selected from adjacent sub-regions. This makes it possible to reduce the size of the feature amount while maintaining the accuracy of subject identification.
- The dimension selection unit 46 can also output local feature amounts hierarchically (progressively). This makes it possible to perform matching (distance calculation) even between local feature amounts with different numbers of selected dimensions (different feature amount sizes).
- Furthermore, since the dimension selection unit 46 selects dimensions based on the positional relationship of the sub-regions, no learning is required for dimension selection. That is, general-purpose local feature extraction can be performed without depending on the data (images).
- the feature point selection unit 12 selects a predetermined number of feature points in order of importance from a plurality of detected feature points based on the feature point information.
- Then, the local feature quantity generation unit 14 generates local feature amounts for the selected feature points.
- the size of the local feature amount can be reduced as compared with the case of generating the local feature amount for all the detected feature points.
- the size of the local feature amount can be controlled to a size corresponding to the designated number.
- In addition, since the feature points targeted for local feature generation are selected in order of importance, the accuracy of subject identification can be maintained.
- the size of the local feature amount is reduced, it is possible to reduce communication time and processing time when performing an image search using the local feature amount.
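The importance-ordered selection of feature points could be sketched as follows. Using scale as the importance measure follows the scale-based note later in the text, but is only one of the orderings the device could use; the data layout is an assumption:

```python
def select_feature_points(points, num_selected):
    """Keep only the num_selected most important feature points.

    points: list of dicts with at least a 'scale' key, used here as an
    illustrative importance proxy (larger scale = more important).
    """
    ranked = sorted(points, key=lambda p: p["scale"], reverse=True)
    return ranked[:num_selected]
```

For example, from three detected points with scales 1.0, 3.0, and 2.0, selecting two keeps the points with scales 3.0 and 2.0.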
- the processing order of the sub-region feature vector generation unit 44 and the dimension selection unit 46 may be switched. That is, in the local feature quantity extraction device 1A, after the dimension is selected by the dimension selection unit 46, the feature vector for the selected dimension may be generated by the sub-region feature vector generation unit 44.
- FIG. 16 is a diagram illustrating a configuration of a local feature quantity extraction device according to the second embodiment of the present invention.
- the local feature quantity extraction device 1B includes a feature point detection unit 10, a selection number determination unit 50, a feature point selection unit 52, and a local feature quantity generation unit 54.
- the local feature quantity generation unit 54 includes a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56.
- the selection number determination unit 50 is added to the local feature quantity extraction device 1A of the first embodiment.
- the feature point selection unit 12 and the dimension selection unit 46 of the local feature quantity extraction device 1A of the first embodiment are changed to the feature point selection unit 52 and the dimension selection unit 56.
- The same reference numerals are attached to the same components as in the first embodiment, and their description is omitted.
- the selection number determination unit 50 can determine the number of feature points selected by the feature point selection unit 52 (number of selected feature points) and the number of dimensions selected by the dimension selection unit 56 (number of selected dimensions). For example, the selection number determination unit 50 can determine the number of feature points and the number of dimensions by receiving information indicating the number of feature points and the number of dimensions from the user. Note that the information indicating the number of feature points and the number of dimensions does not need to indicate the number of feature points or the number of dimensions themselves, and may be information indicating the search accuracy, the search speed, and the like.
- For example, when the selection number determination unit 50 receives an input requesting higher search accuracy, it may determine the number of feature points and the number of dimensions so that at least one of them increases. Likewise, when receiving an input requesting higher search speed, the selection number determination unit 50 may determine the number of feature points and the number of dimensions so that at least one of them decreases.
- The selection number determination unit 50 may also determine the number of feature points and the number of dimensions based on the application in which the local feature quantity extraction device 1B is used, the communication capacity, the processing specifications of the terminal, and the like. Specifically, for example, when the communication capacity is small (the communication speed is slow), the selection number determination unit 50 may determine the number of feature points and the number of dimensions so that at least one of them is smaller than when the communication capacity is large (the communication speed is high). Similarly, when the processing specifications of the terminal are low, the selection number determination unit 50 may determine the number of feature points and the number of dimensions so that at least one of them is smaller than when the processing specifications are high. The selection number determination unit 50 may also dynamically determine the number of feature points and the number of dimensions according to the processing load of the terminal.
- the feature point selection unit 52 can select feature points in the same manner as the feature point selection unit 12 of the first embodiment based on the number of feature points determined by the selection number determination unit 50.
- The dimension selection unit 56 can select the dimensions of the feature vector based on the number of dimensions determined by the selection number determination unit 50, and output them as a local feature amount, in the same manner as the dimension selection unit 46 of the first embodiment.
- As described above, in the second embodiment, the selection number determination unit 50 can determine the number of feature points selected by the feature point selection unit 52 and the number of dimensions selected by the dimension selection unit 56. This makes it possible to determine appropriate numbers of feature points and dimensions based on user input, communication capacity, terminal processing specifications, and the like, and thus to control the feature amount size to a desired size while maintaining the accuracy of subject identification.
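A toy sketch of how a coarse user request could be mapped to the two counts. The base values and scaling factors are illustrative assumptions; the text only requires that an accuracy request raises at least one of the counts and a speed request lowers at least one:

```python
def determine_selection_counts(request, base_points=300, base_dims=75):
    """Map a coarse request to (num_feature_points, num_dimensions).

    'accuracy' raises both counts (dimensions capped at 150, the maximum
    in the running example); 'speed' lowers both; anything else keeps
    the defaults.
    """
    if request == "accuracy":
        return base_points * 2, min(base_dims * 2, 150)
    if request == "speed":
        return base_points // 2, base_dims // 2
    return base_points, base_dims  # default: no adjustment
```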
- FIG. 17 is a diagram showing a configuration of a local feature quantity extraction device according to the third embodiment of the present invention.
- the local feature quantity extraction device 1C includes a feature point detection unit 10, a selection number determination unit 60, a feature point selection unit 52, and a local feature quantity generation unit 54.
- the local feature quantity generation unit 54 includes a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56.
- the selection number determination unit 50 of the local feature quantity extraction device 1B of the second embodiment is changed to the selection number determination unit 60.
- The same reference numerals are attached to the same components as in the second embodiment, and their description is omitted.
- The selection number determination unit 60 can determine the number of feature points and the number of dimensions so that the size of the feature amount of the entire image becomes the designated feature amount size, based on designated feature amount size information, i.e. information specifying the size (total size) of the feature amount of the entire image.
- the selection number determination unit 60 can determine the number of dimensions based on information that defines the correspondence between the designated feature amount size and the number of dimensions.
- FIG. 18 shows an example of information defining the correspondence between the designated feature size and the number of dimensions.
- the selection number determination unit 60 can determine the number of dimensions corresponding to the specified feature amount size by referring to information as shown in FIG. In the correspondence relationship illustrated in FIG. 18, the number of selected dimensions increases as the designated feature amount size increases.
- the correspondence relationship is not limited thereto. For example, it may be a correspondence relationship in which a fixed number of dimensions is associated regardless of the specified feature amount size.
- the information defining the correspondence relationship may be defined in the program, for example, or may be stored in a table or the like referred to by the program.
- the selection number determination unit 60 can determine the number of selected feature points based on the specified feature amount size and the determined number of dimensions so that the feature amount size becomes the specified feature amount size.
- When additional information is output along with the local feature amounts, the selection number determination unit 60 can determine the number of feature points so that the feature amount size, including the description size of the additional information, becomes the designated feature amount size.
- The selection number determination unit 60 can also re-determine the number of dimensions based on the selection result of the feature point selection unit 52. For example, if the input image has few features, few feature points may be detectable in the first place, so the number of feature points selected by the feature point selection unit 52 may not reach the number determined by the selection number determination unit 60. In such a case, the selection number determination unit 60 can receive the number of actually selected feature points from the feature point selection unit 52 and re-determine the number of dimensions so that the designated feature amount size is obtained with that number of feature points. The same applies to the other patterns described later in the third embodiment.
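A minimal sketch of this size budgeting, including the re-determination of dimensions when fewer feature points were detected. One byte per dimension element and the function name are illustrative assumptions:

```python
def plan_feature_budget(designated_size, num_dims, bytes_per_dim=1,
                        detected_points=None):
    """Derive the number of feature points from a designated total feature
    size and a dimension count. If fewer feature points were actually
    detected, the dimension count is re-determined so the total still
    reaches the designated size.
    """
    num_points = designated_size // (num_dims * bytes_per_dim)
    if detected_points is not None and detected_points < num_points:
        num_points = detected_points
        num_dims = designated_size // (num_points * bytes_per_dim)
    return num_points, num_dims
```

For a 600-byte budget at 50 dimensions this plans 12 feature points; if only 10 points are detectable, the per-point dimension count is raised to 60 to fill the same budget.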
- the selection number determination unit 60 can determine the number of feature points based on information that defines the correspondence between the designated feature amount size and the number of feature points.
- the information defining the correspondence can be defined in the same manner as the information defining the correspondence between the designated feature size and the number of dimensions shown in FIG.
- the correspondence relationship can be defined so that the larger the designated feature amount size, the larger the number of selected feature points.
- the correspondence relationship is not limited to this, and may be a correspondence relationship in which a fixed number of feature points is associated regardless of the specified feature amount size.
- the selection number determination unit 60 can determine the number of selected dimensions so that the feature amount size becomes the specified feature amount size based on the specified feature amount size and the determined number of feature points.
- When additional information is output along with the local feature amounts, the selection number determination unit 60 can determine the number of dimensions so that the feature amount size, including the description size of the additional information, becomes the designated feature amount size.
- the selection number determination unit 60 can determine the number of feature points and the number of dimensions based on information that defines the correspondence between the specified feature amount size, the number of feature points, and the number of dimensions.
- the information defining the correspondence can be defined in the same manner as the information defining the correspondence between the designated feature size and the number of dimensions shown in FIG. For example, it is possible to define the correspondence so that the larger the designated feature amount size, the greater the number of feature points or dimensions selected.
- the correspondence relationship is not limited to this, and may be, for example, a correspondence relationship in which a fixed number of feature points or dimensions is associated regardless of the specified feature amount size.
- When additional information is output along with the local feature amounts, the selection number determination unit 60 can determine the number of feature points and the number of dimensions so that the feature amount size, including the description size of the additional information, becomes the designated feature amount size.
- As described above, in the third embodiment, the selection number determination unit 60 can determine the number of feature points selected by the feature point selection unit 52 and the number of dimensions selected by the dimension selection unit 56 based on the designated feature amount size information. This makes it possible to control the feature amount size to a desired size while maintaining the accuracy of subject identification.
- FIG. 19 is a diagram illustrating a configuration of a local feature quantity extraction device according to the fourth embodiment of the present invention.
- the local feature quantity extraction device 1D includes a feature point detection unit 10, a selection number determination unit 70, a feature point selection unit 72, and a local feature quantity generation unit 54.
- the local feature quantity generation unit 54 includes a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56.
- In the local feature quantity extraction device 1D, the selection number determination unit 60 and the feature point selection unit 52 of the local feature quantity extraction device 1C of the third embodiment are changed to the selection number determination unit 70 and the feature point selection unit 72, respectively.
- The same reference numerals are attached to the same components as in the third embodiment, and their description is omitted.
- the feature point selection unit 72 selects feature points based on the feature point information output from the feature point detection unit 10, similarly to the feature point selection unit 12 of the first embodiment. Then, the feature point selection unit 72 outputs information indicating the number of selected feature points to the selection number determination unit 70.
- The selection number determination unit 70 can receive the designated feature amount size information in the same manner as the selection number determination unit 60 of the third embodiment. The selection number determination unit 70 can then determine the number of dimensions so that the feature amount of the entire image becomes the designated feature amount size, based on the designated feature amount size information and the number of feature points selected by the feature point selection unit 72.
- When additional information is output along with the local feature amounts, the selection number determination unit 70 can determine the number of dimensions so that the feature amount size, including the description size of the additional information, becomes the designated feature amount size.
- As described above, in the fourth embodiment, the selection number determination unit 70 can determine the number of dimensions selected by the dimension selection unit 56 based on the designated feature amount size information and the number of feature points selected by the feature point selection unit 72. This makes it possible to control the feature amount size to a desired size while maintaining the accuracy of subject identification.
- FIG. 20 is a diagram illustrating a configuration of a local feature quantity extraction device according to the fifth embodiment of the present invention.
- the local feature quantity extraction device 1E includes a feature point detection unit 10, a selection number determination unit 80, a feature point selection unit 82, and a local feature quantity generation unit 54.
- the local feature quantity generation unit 54 includes a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56.
- In the local feature quantity extraction device 1E, the selection number determination unit 50 and the feature point selection unit 52 of the local feature quantity extraction device 1B of the second embodiment are changed to the selection number determination unit 80 and the feature point selection unit 82, respectively.
- The same reference numerals are attached to the same components as in the second embodiment, and their description is omitted.
- the feature point selection unit 82 selects feature points based on the feature point information output from the feature point detection unit 10 as in the feature point selection unit 12 of the first embodiment. Then, the feature point selection unit 82 outputs importance level information indicating the importance level of each selected feature point to the selection number determination unit 80.
- The selection number determination unit 80 can determine, for each feature point, the number of dimensions selected by the dimension selection unit 56, based on the importance information output from the feature point selection unit 82. For example, the selection number determination unit 80 can determine the numbers of dimensions such that feature points with higher importance have more dimensions selected.
- The selection number determination unit 80 may also accept the designated feature amount size information and determine the numbers of dimensions so that the size of the feature amount of the entire image becomes the designated feature amount size. Specifically, for example, the selection number determination unit 80 may determine the numbers of dimensions so that more dimensions are selected for feature points with higher importance while the total feature amount size of the image becomes the designated feature amount size.
- When additional information is output along with the local feature amounts, the selection number determination unit 80 can determine the numbers of dimensions so that the feature amount size, including the description size of the additional information, becomes the designated feature amount size.
- As described above, in the fifth embodiment, the selection number determination unit 80 can determine, for each feature point, the number of dimensions selected by the dimension selection unit 56, based on the importance of each feature point selected by the feature point selection unit 82. This makes it possible to control the feature amount size to a desired size while maintaining the accuracy of subject identification.
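An importance-weighted split of a total dimension budget could be sketched as follows; the proportional rounding scheme (floor, then hand the remainder to the most important points) is an illustrative assumption:

```python
def allocate_dims_by_importance(importances, total_dims):
    """Split a total dimension budget across feature points in proportion
    to their importance, so more important points get more dimensions
    while the overall size stays at the designated budget.
    """
    total = sum(importances)
    dims = [int(total_dims * w / total) for w in importances]
    # hand the rounding remainder to the most important points first
    leftover = total_dims - sum(dims)
    by_importance = sorted(range(len(importances)),
                           key=lambda i: importances[i], reverse=True)
    for i in by_importance[:leftover]:
        dims[i] += 1
    return dims
```

For importances [2, 1, 1] and a budget of 10 dimensions, the allocation is [6, 2, 2]: the most important point absorbs the rounding remainder.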
- FIG. 21 is a diagram showing an example of a collation system to which the local feature quantity extraction device shown in the first to fifth embodiments can be applied.
- the collation system includes a collation device 100, a feature amount database (DB: Database) 110, and a local feature amount extraction device 120.
- The collation device 100 can search for images containing a subject similar to the subject in the image input to the local feature amount extraction device 120, by collating the local feature amount included in the search request transmitted from the local feature amount extraction device 120 with the local feature amounts stored in the feature amount DB 110.
- The feature amount DB 110 stores local feature amounts extracted from a plurality of images in association with the images from which they were extracted.
- The local feature amounts stored in the feature amount DB 110 can be, for example, 150-dimensional feature vectors output in the order shown in FIG. 10.
- the local feature quantity extraction device 120 can use the local feature quantity extraction device shown in any one of the first to fifth embodiments.
- the local feature quantity extraction device 120 generates a search request including the local feature quantity of the feature point detected in the input image and transmits it to the matching device 100.
- The collation apparatus 100 determines images similar to the input image by collating the received local feature amounts with the local feature amounts stored in the feature amount DB 110. The collation device 100 then outputs information indicating the images determined to be similar to the input image to the local feature amount extraction device 120 as the search result.
- In this collation system, the local feature quantity extraction device 120 selects the dimensions (elements) to output as local feature amounts so that the correlation between adjacent sub-regions becomes low, based on the positional relationship of the sub-regions. Accordingly, the size of the local feature amounts can be reduced while maintaining the matching accuracy of the collation device 100.
- Even when the numbers of dimensions differ, the matching apparatus 100 can execute the matching process using the local feature amount dimensions common to both sides. For example, the collation apparatus 100 can perform collation using only the first 50 dimensions of the local feature amounts. That is, even if the number of dimensions of the local feature amount is changed according to the processing capability of the local feature quantity extraction device 120, the matching apparatus 100 can execute the matching process using local feature amounts with the changed number of dimensions.
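A toy sketch of the DB-side matching over shared dimension prefixes; the linear scan, the dict-based DB, and the function name are assumptions for illustration, not the patent's retrieval method:

```python
def match_descriptor(query, feature_db):
    """Return the id of the DB entry closest to the query descriptor.

    feature_db: hypothetical dict mapping image ids to descriptors of
    possibly different lengths; the squared distance is taken over the
    dimension prefix shared by the query and each stored descriptor.
    """
    best_id, best_dist = None, float("inf")
    for image_id, stored in feature_db.items():
        k = min(len(query), len(stored))
        dist = sum((a - b) ** 2 for a, b in zip(query[:k], stored[:k]))
        if dist < best_dist:
            best_id, best_dist = image_id, dist
    return best_id
```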
- The above embodiments are intended to facilitate understanding of the present invention, not to limit its interpretation. The present invention can be modified and improved without departing from its spirit, and the present invention includes equivalents thereof.
- A local feature quantity extraction device comprising: a feature point detection unit that detects a plurality of feature points in an image and outputs feature point information, which is information regarding each feature point; a feature point selection unit that selects a predetermined number of feature points in order of importance from the plurality of detected feature points based on the feature point information; a local region acquisition unit that acquires a local region for each selected feature point; a sub-region division unit that divides each local region into a plurality of sub-regions; a sub-region feature vector generation unit that generates a multidimensional feature vector for each sub-region in each local region; and a dimension selection unit that, based on the positional relationship of the sub-regions in each local region, selects dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions becomes low, and outputs the elements of the selected dimensions as the feature amount of the local region.
- A local feature quantity extraction device further comprising a selection number determination unit that determines the number of feature points selected by the feature point selection unit and the number of dimensions selected by the dimension selection unit.
- A local feature quantity extraction device in which the selection number determination unit receives designated feature amount size information, which is information designating the total size of the feature amounts of the selected feature points, and determines the number of feature points and the number of dimensions based on the designated feature amount size information.
- A local feature quantity extraction device in which the selection number determination unit determines the number of feature points and the number of dimensions based on the designated feature amount size information and information indicating the correspondence between the total size and the number of dimensions.
- A local feature quantity extraction device in which the selection number determination unit determines the number of feature points and the number of dimensions based on the designated feature amount size information and information indicating the correspondence between the total size and the number of feature points.
- A local feature quantity extraction device in which the dimension selection unit outputs the elements of the selected dimensions as the feature amount of the local region in the order of the dimensions selected according to the selection order.
- (Supplementary note 13)
- The local feature quantity extraction device according to any one of supplementary notes 1 to 12, wherein the feature point information includes scale information indicating the scale of each feature point, and the feature point selection unit selects the predetermined number of feature points in order of importance according to scale from the plurality of detected feature points, based on the scale information.
- A local feature quantity extraction method in which a computer detects a plurality of feature points in an image, outputs feature point information that is information about each feature point, selects a predetermined number of feature points in order of importance based on the feature point information, acquires a local region for each selected feature point, divides each local region into a plurality of sub-regions, generates a multidimensional feature vector for each sub-region in each local region, selects dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions becomes low based on the positional relationship of the sub-regions in each local region, and outputs the elements of the selected dimensions as the feature amount of the local region.
- A program causing a computer to realize: a function of detecting a plurality of feature points in an image and outputting feature point information that is information about each feature point; a function of selecting a predetermined number of feature points in order of importance from the plurality of detected feature points based on the feature point information; a function of acquiring a local region for each selected feature point; a function of dividing each local region into a plurality of sub-regions; a function of generating a multidimensional feature vector for each sub-region in each local region; and a function of selecting dimensions from the feature vector of each sub-region so that the correlation between adjacent sub-regions becomes low based on the positional relationship of the sub-regions in each local region, and outputting the elements of the selected dimensions as the feature amount of the local region.
Description
FIG. 1 shows the configuration of a local feature descriptor extraction device according to the first embodiment of the present invention. The local feature descriptor extraction device 1A comprises a feature point detection unit 10, a feature point selection unit 12, and a local feature descriptor generation unit 14. The local feature descriptor extraction device 1A can be built on an information processing apparatus such as a personal computer or a portable information terminal. Each unit of the local feature descriptor extraction device 1A can be realized, for example, by using a storage area such as a memory, or by a processor executing a program stored in such a storage area. The constituent elements of the other embodiments described later can be realized in the same manner.
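As an orientation aid, the three-stage flow of device 1A (feature point detection, feature point selection, local feature generation) can be sketched in Python. This is an illustrative toy, not the patented implementation: the gradient-magnitude detector and the raw-region cut-out are stand-ins chosen for brevity, and the field names in the feature point information are assumptions.

```python
import numpy as np

def detect_feature_points(image):
    """Feature point detection unit 10 (toy version): treat local maxima of
    gradient magnitude as feature points and reuse that magnitude as an
    importance score. The detector choice is an assumption; the patent
    itself does not prescribe one."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    points = []
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            if mag[y, x] > 0 and mag[y, x] == mag[y - 1:y + 2, x - 1:x + 2].max():
                points.append({"x": x, "y": y, "importance": float(mag[y, x])})
    return points  # the "feature point information"

def select_feature_points(points, n):
    """Feature point selection unit 12: keep the n most important points."""
    return sorted(points, key=lambda p: -p["importance"])[:n]

def extract_local_regions(image, points, radius=2):
    """First step of local feature generation unit 14: cut out a local
    region around each selected feature point."""
    regions = []
    for p in points:
        y, x = p["y"], p["x"]
        regions.append(image[max(0, y - radius):y + radius + 1,
                             max(0, x - radius):x + radius + 1])
    return regions
```

Each region would then be divided into sub-regions and converted to per-sub-region feature vectors, as the later units describe.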
Next, the second embodiment will be described. FIG. 16 shows the configuration of the local feature descriptor extraction device according to the second embodiment of the present invention. As shown in FIG. 16, the local feature descriptor extraction device 1B comprises a feature point detection unit 10, a selection number determination unit 50, a feature point selection unit 52, and a local feature descriptor generation unit 54. The local feature descriptor generation unit 54 comprises a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56. In other words, in the local feature descriptor extraction device 1B, a selection number determination unit 50 is added to the device 1A of the first embodiment, and the feature point selection unit 12 and the dimension selection unit 46 of the device 1A are replaced with the feature point selection unit 52 and the dimension selection unit 56. Constituent elements identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
Next, the third embodiment will be described. FIG. 17 shows the configuration of the local feature descriptor extraction device according to the third embodiment of the present invention. As shown in FIG. 17, the local feature descriptor extraction device 1C comprises a feature point detection unit 10, a selection number determination unit 60, a feature point selection unit 52, and a local feature descriptor generation unit 54. The local feature descriptor generation unit 54 comprises a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56. In other words, in the local feature descriptor extraction device 1C, the selection number determination unit 50 of the device 1B of the second embodiment is replaced with the selection number determination unit 60. Constituent elements identical to those of the second embodiment are given the same reference numerals, and their description is omitted.
Next, the fourth embodiment will be described. FIG. 19 shows the configuration of the local feature descriptor extraction device according to the fourth embodiment of the present invention. As shown in FIG. 19, the local feature descriptor extraction device 1D comprises a feature point detection unit 10, a selection number determination unit 70, a feature point selection unit 72, and a local feature descriptor generation unit 54. The local feature descriptor generation unit 54 comprises a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56. In other words, in the local feature descriptor extraction device 1D, the selection number determination unit 60 and the feature point selection unit 52 of the device 1C of the third embodiment are replaced with the selection number determination unit 70 and the feature point selection unit 72. Constituent elements identical to those of the third embodiment are given the same reference numerals, and their description is omitted.
Next, the fifth embodiment will be described. FIG. 20 shows the configuration of the local feature descriptor extraction device according to the fifth embodiment of the present invention. As shown in FIG. 20, the local feature descriptor extraction device 1E comprises a feature point detection unit 10, a selection number determination unit 80, a feature point selection unit 82, and a local feature descriptor generation unit 54. The local feature descriptor generation unit 54 comprises a local region acquisition unit 40, a sub-region division unit 42, a sub-region feature vector generation unit 44, and a dimension selection unit 56. In other words, in the local feature descriptor extraction device 1E, the selection number determination unit 50 and the feature point selection unit 52 of the device 1B of the second embodiment are replaced with the selection number determination unit 80 and the feature point selection unit 82. Constituent elements identical to those of the second embodiment are given the same reference numerals, and their description is omitted.
FIG. 21 shows an example of a matching system to which the local feature descriptor extraction devices of the first to fifth embodiments can be applied. As shown in FIG. 21, the matching system comprises a matching device 100, a feature descriptor database (DB) 110, and a local feature descriptor extraction device 120.
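The matching device 100 compares descriptors extracted from a query image against descriptors stored in the feature descriptor DB 110. A minimal nearest-neighbour matching sketch is shown below; the distance threshold is an illustrative assumption, and practical systems typically add a ratio test and geometric verification, which are omitted here.

```python
import numpy as np

def match_descriptors(query, reference, max_dist=0.5):
    """Greedy nearest-neighbour matching between two descriptor sets.
    Returns (query index, reference index) pairs whose Euclidean
    distance falls below max_dist."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(reference - q, axis=1)  # distance to every DB entry
        ri = int(np.argmin(dists))
        if dists[ri] < max_dist:
            matches.append((qi, ri))
    return matches
```

The number of accepted matches can then serve as a similarity score between the query image and a database image.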
(Supplementary note 1) A local feature descriptor extraction device comprising: a feature point detection unit that detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point; a feature point selection unit that, based on the feature point information, selects a predetermined number of feature points in order of importance from the plurality of detected feature points; a local region acquisition unit that acquires a local region for each selected feature point; a sub-region division unit that divides each local region into a plurality of sub-regions; a sub-region feature vector generation unit that generates a multi-dimensional feature vector for each sub-region within each local region; and a dimension selection unit that, based on the positional relationship of the sub-regions within each local region, selects dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low, and outputs the elements of the selected dimensions as the feature descriptor of the local region.
(Supplementary note 2) The local feature descriptor extraction device according to supplementary note 1, further comprising a selection number determination unit that determines the number of feature points to be selected by the feature point selection unit and the number of dimensions to be selected by the dimension selection unit.
(Supplementary note 3) The local feature descriptor extraction device according to supplementary note 2, wherein the selection number determination unit accepts designated feature size information, which is information for designating the total size of the feature descriptors of the selected feature points, and determines the number of feature points and the number of dimensions based on the designated feature size information.
(Supplementary note 4) The local feature descriptor extraction device according to supplementary note 3, wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence between the total size and the number of dimensions, and on the designated feature size information.
(Supplementary note 5) The local feature descriptor extraction device according to supplementary note 3, wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence between the total size and the number of feature points, and on the designated feature size information.
(Supplementary note 6) The local feature descriptor extraction device according to supplementary note 3, wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence among the total size, the number of feature points, and the number of dimensions, and on the designated feature size information.
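Supplementary notes 3 to 6 describe deriving the feature point count and dimension count from a designated total feature size. One policy that realizes such a correspondence is sketched below; the candidate dimension ladder, the minimum point count, and the "prefer longer descriptors first" preference are illustrative assumptions, not part of the claims.

```python
def determine_selection_numbers(designated_size_bytes,
                                min_points=50,
                                candidate_dims=(128, 96, 64, 48, 32),
                                bytes_per_dim=1):
    """Pick (number of feature points, number of dimensions) so that
    points * dims * bytes_per_dim fits within the designated total size.
    Tries the richest descriptor length that still leaves room for at
    least min_points feature points; otherwise falls back to the
    shortest descriptor length in the ladder."""
    for dims in candidate_dims:
        points = designated_size_bytes // (dims * bytes_per_dim)
        if points >= min_points:
            return int(points), dims
    dims = candidate_dims[-1]  # fallback: shortest descriptors
    return int(designated_size_bytes // (dims * bytes_per_dim)), dims
```

For example, a 12,800-byte budget yields 100 points of 128 dimensions under this policy, while a 3,200-byte budget drops to 64 dimensions to keep at least 50 points.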
(Supplementary note 7) The local feature descriptor extraction device according to supplementary note 1, further comprising a selection number determination unit that accepts selection result information indicating the result of feature point selection by the feature point selection unit, and determines the number of dimensions based on the selection result information.
(Supplementary note 8) The local feature descriptor extraction device according to supplementary note 7, wherein the selection result information includes importance information indicating the importance of each selected feature point, and the selection number determination unit determines the number of dimensions for each selected feature point based on the importance information.
(Supplementary note 9) The local feature descriptor extraction device according to supplementary note 7 or 8, wherein the selection number determination unit further accepts designated feature size information, which is information for designating the total size of the feature descriptors of the selected feature points, and determines the number of dimensions based on the selection result information and the designated feature size information.
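Supplementary notes 8 and 9 allow the dimension count to vary per selected feature point according to its importance, under a total size budget. One way to realize this is proportional allocation with clipping, sketched below; the proportional rule and the clip bounds are assumptions for illustration.

```python
def dims_per_point(importances, total_dims_budget, min_dims=16, max_dims=128):
    """Allocate a per-feature-point dimension count from importance:
    more important points get longer descriptors, clipped to
    [min_dims, max_dims], without exceeding the total budget."""
    total_imp = sum(importances) or 1.0
    alloc = []
    remaining = total_dims_budget
    for imp in importances:
        d = int(round(total_dims_budget * imp / total_imp))  # proportional share
        d = max(min_dims, min(max_dims, d))                  # clip to bounds
        d = min(d, max(0, remaining))                        # respect the budget
        alloc.append(d)
        remaining -= d
    return alloc
```

Under this rule, a point twice as important as another receives roughly twice as many descriptor dimensions, within the clip bounds.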
(Supplementary note 10) The local feature descriptor extraction device according to any one of supplementary notes 1 to 9, wherein the dimension selection unit selects dimensions from the feature vectors such that at least one of the selected dimensions differs between adjacent sub-regions.
(Supplementary note 11) The local feature descriptor extraction device according to any one of supplementary notes 1 to 10, wherein the dimension selection unit selects dimensions from the feature vectors in accordance with a selection order for selecting dimensions among the feature vectors of the plurality of sub-regions within a local region.
(Supplementary note 12) The local feature descriptor extraction device according to supplementary note 11, wherein the dimension selection unit outputs the elements of the selected dimensions as the feature descriptor of the local region in the order in which the dimensions were selected according to the selection order.
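A concrete way to make adjacent sub-regions select different dimensions is a checkerboard pattern over the sub-region grid: even-parity cells keep the even-numbered histogram bins and odd-parity cells keep the odd bins, so 4-neighbouring sub-regions share no selected dimension (a stronger form of supplementary note 10). The 4x4 grid and 8 bins below echo a SIFT-like layout but are illustrative assumptions.

```python
def select_dimensions(grid_w=4, grid_h=4, bins=8, keep=4):
    """Checkerboard dimension selection over a grid of sub-regions.
    Returns, for each (row, col) cell, the list of kept bin indices."""
    selected = {}
    for r in range(grid_h):
        for c in range(grid_w):
            parity = (r + c) % 2
            selected[(r, c)] = [b for b in range(bins)
                                if b % 2 == parity][:keep]
    return selected

def apply_selection(feature_vectors, selected):
    """Output the elements of the selected dimensions, sub-region by
    sub-region, as the local-region feature descriptor."""
    out = []
    for cell, dims in sorted(selected.items()):
        out.extend(feature_vectors[cell][d] for d in dims)
    return out
```

With this scheme a 4x4 grid of 8-bin histograms (128 raw dimensions) is reduced to a 64-dimension descriptor while keeping neighbouring sub-regions decorrelated.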
(Supplementary note 13) The local feature descriptor extraction device according to any one of supplementary notes 1 to 12, wherein the feature point information includes scale information indicating the scale of each feature point, and the feature point selection unit, based on the scale information, selects the predetermined number of feature points from the plurality of detected feature points in order of importance according to scale.
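The scale-ordered selection of supplementary note 13 reduces to a sort. The sketch below assumes a `scale` field in the feature point information and defaults to ranking larger scales higher (larger scales are often more robust to image resizing); that ordering is a design choice, not part of the claim.

```python
def select_by_scale(feature_point_info, n, prefer_large=True):
    """Order detected feature points by their scale and keep the top n."""
    ranked = sorted(feature_point_info,
                    key=lambda p: p["scale"], reverse=prefer_large)
    return ranked[:n]
```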
(Supplementary note 14) The local feature descriptor extraction device according to any one of supplementary notes 1 to 12, wherein the feature point selection unit comprises: a feature point classification unit that classifies the plurality of detected feature points into a plurality of groups based on the feature point information; and a representative feature point selection unit that selects the predetermined number of feature points by selecting at least one feature point from each group.
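Supplementary note 14 splits selection into classification into groups plus a per-group representative pick, which spreads the selected points across the image instead of clustering them where responses are strongest. The sketch below buckets points by x-coordinate and takes each bucket's most important point; the bucketing key and tie-breaking are illustrative assumptions.

```python
def select_representatives(points, n_groups, n_select):
    """Classify feature points into n_groups buckets along x, then pick
    one representative per bucket (the most important point), returning
    up to n_select representatives ordered by importance."""
    xs = [p["x"] for p in points]
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_groups or 1.0  # avoid zero width for degenerate input
    groups = {}
    for p in points:
        g = min(int((p["x"] - lo) / width), n_groups - 1)
        groups.setdefault(g, []).append(p)
    reps = [max(ps, key=lambda p: p["importance"]) for ps in groups.values()]
    reps.sort(key=lambda p: -p["importance"])
    return reps[:n_select]
```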
(Supplementary note 15) A local feature descriptor extraction method in which a computer: detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point; selects, based on the feature point information, a predetermined number of feature points in order of importance from the plurality of detected feature points; acquires a local region for each selected feature point; divides each local region into a plurality of sub-regions; generates a multi-dimensional feature vector for each sub-region within each local region; selects, based on the positional relationship of the sub-regions within each local region, dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low; and outputs the elements of the selected dimensions as the feature descriptor of the local region.
(Supplementary note 16) A program for causing a computer to realize: a function of detecting a plurality of feature points in an image and outputting feature point information, which is information about each feature point; a function of selecting, based on the feature point information, a predetermined number of feature points in order of importance from the plurality of detected feature points; a function of acquiring a local region for each selected feature point; a function of dividing each local region into a plurality of sub-regions; a function of generating a multi-dimensional feature vector for each sub-region within each local region; and a function of selecting, based on the positional relationship of the sub-regions within each local region, dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low, and outputting the elements of the selected dimensions as the feature descriptor of the local region.
10 feature point detection unit
12 feature point selection unit
14 local feature descriptor generation unit
40 local region acquisition unit
42 sub-region division unit
44 sub-region feature vector generation unit
46 dimension selection unit
Claims (10)
- A local feature descriptor extraction device comprising:
a feature point detection unit that detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point;
a feature point selection unit that, based on the feature point information, selects a predetermined number of feature points in order of importance from the plurality of detected feature points;
a local region acquisition unit that acquires a local region for each selected feature point;
a sub-region division unit that divides each local region into a plurality of sub-regions;
a sub-region feature vector generation unit that generates a multi-dimensional feature vector for each sub-region within each local region; and
a dimension selection unit that, based on the positional relationship of the sub-regions within each local region, selects dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low, and outputs the elements of the selected dimensions as the feature descriptor of the local region. - The local feature descriptor extraction device according to claim 1,
further comprising a selection number determination unit that determines the number of feature points to be selected by the feature point selection unit and the number of dimensions to be selected by the dimension selection unit. - The local feature descriptor extraction device according to claim 2,
wherein the selection number determination unit accepts designated feature size information, which is information for designating the total size of the feature descriptors of the selected feature points, and determines the number of feature points and the number of dimensions based on the designated feature size information. - The local feature descriptor extraction device according to claim 3,
wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence between the total size and the number of dimensions, and on the designated feature size information. - The local feature descriptor extraction device according to claim 3,
wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence between the total size and the number of feature points, and on the designated feature size information. - The local feature descriptor extraction device according to claim 3,
wherein the selection number determination unit determines the number of feature points and the number of dimensions based on information indicating the correspondence among the total size, the number of feature points, and the number of dimensions, and on the designated feature size information. - The local feature descriptor extraction device according to claim 1,
further comprising a selection number determination unit that accepts selection result information indicating the result of feature point selection by the feature point selection unit, and determines the number of dimensions based on the selection result information. - The local feature descriptor extraction device according to claim 7,
wherein the selection result information includes importance information indicating the importance of each selected feature point, and
the selection number determination unit determines the number of dimensions for each selected feature point based on the importance information. - A local feature descriptor extraction method in which a computer:
detects a plurality of feature points in an image and outputs feature point information, which is information about each feature point;
selects, based on the feature point information, a predetermined number of feature points in order of importance from the plurality of detected feature points;
acquires a local region for each selected feature point;
divides each local region into a plurality of sub-regions;
generates a multi-dimensional feature vector for each sub-region within each local region; and
selects, based on the positional relationship of the sub-regions within each local region, dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low, and outputs the elements of the selected dimensions as the feature descriptor of the local region. - A program for causing a computer to realize:
a function of detecting a plurality of feature points in an image and outputting feature point information, which is information about each feature point;
a function of selecting, based on the feature point information, a predetermined number of feature points in order of importance from the plurality of detected feature points;
a function of acquiring a local region for each selected feature point;
a function of dividing each local region into a plurality of sub-regions;
a function of generating a multi-dimensional feature vector for each sub-region within each local region; and
a function of selecting, based on the positional relationship of the sub-regions within each local region, dimensions from the feature vector of each sub-region such that the correlation between neighboring sub-regions is low, and outputting the elements of the selected dimensions as the feature descriptor of the local region.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/359,063 US9697435B2 (en) | 2011-11-18 | 2012-11-15 | Local feature descriptor extracting apparatus, local feature descriptor extracting method, and program |
JP2013544319A JP6044548B2 (ja) | 2011-11-18 | 2012-11-15 | Local feature descriptor extraction device, local feature descriptor extraction method, and program |
EP12849587.6A EP2782067B1 (en) | 2011-11-18 | 2012-11-15 | Local feature amount extraction device, local feature amount extraction method, and program |
KR1020147016332A KR101612212B1 (ko) | 2011-11-18 | 2012-11-15 | Local feature descriptor extraction apparatus, local feature descriptor extraction method, and computer-readable recording medium recording a program |
CN201280056804.2A CN103946891B (zh) | 2011-11-18 | 2012-11-15 | Local feature descriptor extraction device and local feature descriptor extraction method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-253223 | 2011-11-18 | ||
JP2011253223 | 2011-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013073622A1 true WO2013073622A1 (ja) | 2013-05-23 |
Family
ID=48429676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/079673 WO2013073622A1 (ja) | 2012-11-15 | Local feature descriptor extraction device, local feature descriptor extraction method, and program |
Country Status (6)
Country | Link |
---|---|
US (1) | US9697435B2 (ja) |
EP (1) | EP2782067B1 (ja) |
JP (1) | JP6044548B2 (ja) |
KR (1) | KR101612212B1 (ja) |
CN (1) | CN103946891B (ja) |
WO (1) | WO2013073622A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101533925B1 (ko) * | 2014-05-20 | 2015-07-03 | 한양대학교 에리카산학협력단 | Method and apparatus for detecting small targets in infrared images |
JPWO2015099016A1 (ja) * | 2013-12-26 | 2017-03-23 | 日本電気株式会社 | Image processing apparatus, subject identification method, and program |
CN111127385A (zh) * | 2019-06-06 | 2020-05-08 | 昆明理工大学 | Cross-modal hash-encoding learning method for medical information based on generative adversarial networks |
CN113127663A (zh) * | 2021-04-01 | 2021-07-16 | 深圳力维智联技术有限公司 | Target image search method, apparatus, device, and computer-readable storage medium |
CN117058723A (zh) * | 2023-10-11 | 2023-11-14 | 腾讯科技(深圳)有限公司 | Palmprint recognition method, apparatus, and storage medium |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6168303B2 (ja) * | 2012-01-30 | 2017-07-26 | 日本電気株式会社 | Information processing system, information processing method, information processing apparatus and control method and control program therefor, and communication terminal and control method and control program therefor |
JP6225460B2 (ja) * | 2013-04-08 | 2017-11-08 | オムロン株式会社 | Image processing apparatus, image processing method, control program, and recording medium |
FR3009635B1 (fr) * | 2013-08-08 | 2016-08-19 | St Microelectronics Sa | Method for searching for an image similar to a reference image in an image bank |
CN105989367B (zh) * | 2015-02-04 | 2019-06-28 | 阿里巴巴集团控股有限公司 | Target acquisition method and device |
EP3113077B1 (en) * | 2015-06-30 | 2018-08-29 | Lingaro Sp. z o.o. | Method for image feature point description |
CN105243661A (zh) * | 2015-09-21 | 2016-01-13 | 成都融创智谷科技有限公司 | Corner detection method based on the SUSAN operator |
US10339411B1 (en) | 2015-09-28 | 2019-07-02 | Amazon Technologies, Inc. | System to represent three-dimensional objects |
US9830528B2 (en) | 2015-12-09 | 2017-11-28 | Axis Ab | Rotation invariant object feature recognition |
CN110895699B (zh) * | 2018-09-12 | 2022-09-13 | 北京字节跳动网络技术有限公司 | Method and apparatus for processing feature points of an image |
CN111325215B (zh) * | 2018-12-14 | 2024-03-19 | 中国移动通信集团安徽有限公司 | Image local feature description method, apparatus, device, and medium |
CN110889904B (zh) * | 2019-11-20 | 2024-01-19 | 广州智能装备研究院有限公司 | Image feature reduction method |
CN111160363B (zh) * | 2019-12-02 | 2024-04-02 | 深圳市优必选科技股份有限公司 | Feature descriptor generation method and apparatus, readable storage medium, and terminal device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
JP2010079545A (ja) | 2008-09-25 | 2010-04-08 | Canon Inc | 画像処理装置、画像処理方法およびプログラム |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8175392B2 (en) | 2009-01-29 | 2012-05-08 | Nec Corporation | Time segment representative feature vector generation device |
US20100246969A1 (en) | 2009-03-25 | 2010-09-30 | Microsoft Corporation | Computationally efficient local image descriptors |
CN101714254A (zh) * | 2009-11-16 | 2010-05-26 | 哈尔滨工业大学 | Registration control point extraction method combining multi-scale SIFT and region-invariant-moment features |
CN101859326B (zh) | 2010-06-09 | 2012-04-18 | 南京大学 | Image retrieval method |
CN102004916B (zh) | 2010-11-15 | 2013-04-24 | 无锡中星微电子有限公司 | Image feature extraction system and method |
CN102169581A (zh) | 2011-04-18 | 2011-08-31 | 北京航空航天大学 | Fast, high-precision, robust matching method based on feature vectors |
KR101611778B1 (ko) | 2011-11-18 | 2016-04-15 | 닛본 덴끼 가부시끼가이샤 | Local feature descriptor extraction device, local feature descriptor extraction method, and computer-readable recording medium recording a program |
- 2012
- 2012-11-15 WO PCT/JP2012/079673 patent/WO2013073622A1/ja active Application Filing
- 2012-11-15 JP JP2013544319A patent/JP6044548B2/ja active Active
- 2012-11-15 CN CN201280056804.2A patent/CN103946891B/zh active Active
- 2012-11-15 KR KR1020147016332A patent/KR101612212B1/ko active IP Right Grant
- 2012-11-15 EP EP12849587.6A patent/EP2782067B1/en active Active
- 2012-11-15 US US14/359,063 patent/US9697435B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
JP2010079545A (ja) | 2008-09-25 | 2010-04-08 | Canon Inc | 画像処理装置、画像処理方法およびプログラム |
Non-Patent Citations (6)
Title |
---|
DAVID G. LOWE: "Distinctive image features from scale-invariant keypoints", USA, INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 60, no. 2, 2004, pages 91 - 110, XP002756976, DOI: doi:10.1023/B:VISI.0000029664.99615.94 |
HIRONOBU FUJIYOSHI: "Gradient-Based Feature Extraction : SIFT and HOG", IPSJ SIG NOTES, vol. 2007, no. 87, 3 September 2007 (2007-09-03), pages 211 - 224, XP055150076 * |
KOICHI KISE: "Specific Object Recognition by Image Matching Using Local Features", JOURNAL OF JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, vol. 25, no. 6, 1 November 2010 (2010-11-01), pages 769 - 776, XP008173967 * |
See also references of EP2782067A4 |
TAKAYUKI HONDO: "Inspection of Memory Reduction Methods for Specific Object Recognition", IPSJ SIG NOTES, vol. 2009, no. 29, 6 March 2009 (2009-03-06), pages 171 - 176, XP008173892 * |
TAKUMI KOBAYASHI: "Higher-order Local Auto- correlation Based Image Features and Their Applications", THE JOURNAL OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 94, no. 4, 1 April 2011 (2011-04-01), pages 335 - 340, XP008173964 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2015099016A1 (ja) * | 2013-12-26 | 2017-03-23 | 日本電気株式会社 | Image processing apparatus, subject identification method, and program |
KR101533925B1 (ko) * | 2014-05-20 | 2015-07-03 | 한양대학교 에리카산학협력단 | Method and apparatus for detecting small targets in infrared images |
CN111127385A (zh) * | 2019-06-06 | 2020-05-08 | 昆明理工大学 | Cross-modal hash-encoding learning method for medical information based on generative adversarial networks |
CN113127663A (zh) * | 2021-04-01 | 2021-07-16 | 深圳力维智联技术有限公司 | Target image search method, apparatus, device, and computer-readable storage medium |
CN113127663B (zh) * | 2021-04-01 | 2024-02-27 | 深圳力维智联技术有限公司 | Target image search method, apparatus, device, and computer-readable storage medium |
CN117058723A (zh) * | 2023-10-11 | 2023-11-14 | 腾讯科技(深圳)有限公司 | Palmprint recognition method, apparatus, and storage medium |
CN117058723B (zh) * | 2023-10-11 | 2024-01-19 | 腾讯科技(深圳)有限公司 | Palmprint recognition method, apparatus, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103946891A (zh) | 2014-07-23 |
US20140328543A1 (en) | 2014-11-06 |
US9697435B2 (en) | 2017-07-04 |
EP2782067A1 (en) | 2014-09-24 |
JP6044548B2 (ja) | 2016-12-14 |
KR20140091067A (ko) | 2014-07-18 |
EP2782067A4 (en) | 2016-08-17 |
CN103946891B (zh) | 2017-02-22 |
KR101612212B1 (ko) | 2016-04-15 |
EP2782067B1 (en) | 2019-09-18 |
JPWO2013073622A1 (ja) | 2015-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6044548B2 (ja) | Local feature descriptor extraction device, local feature descriptor extraction method, and program | |
JP6103243B2 (ja) | Local feature descriptor extraction device, local feature descriptor extraction method, and program | |
JP6044547B2 (ja) | Local feature descriptor extraction device, local feature descriptor extraction method, and program | |
JP5527555B2 (ja) | Image database creation method, creation program, and image retrieval method | |
JP5818327B2 (ja) | Method and apparatus for creating an image database for three-dimensional object recognition | |
US9064171B2 (en) | Detection device and method for transition area in space | |
KR101191223B1 (ko) | Image retrieval method, apparatus, and computer-readable recording medium for executing the method | |
US10026197B2 (en) | Signal processing method, signal processing apparatus, and storage medium | |
US9792528B2 (en) | Information processing system, information processing method, information processing apparatus and control method and control program thereof, and communication terminal and control method and control program thereof | |
JP6714669B2 (ja) | 勾配ヒストグラムに基づいて画像記述子を変換する方法および関連する画像処理装置 | |
WO2011044058A2 (en) | Detecting near duplicate images | |
US20130223749A1 (en) | Image recognition apparatus and method using scalable compact local descriptor | |
US8755605B2 (en) | System and method for compact descriptor for visual search | |
Marouane et al. | Visual positioning systems—An extension to MoVIPS | |
US10650242B2 (en) | Information processing apparatus, method, and storage medium storing a program that obtain a feature amount from a frame in accordance with a specified priority order | |
JP6109118B2 (ja) | Image processing apparatus and method, information processing apparatus and method, and program | |
JP2015032248A (ja) | Image retrieval device, data retrieval system, and program | |
Amato et al. | Indexing vectors of locally aggregated descriptors using inverted files | |
KR20210133038A (ko) | Image recognition system using transformed images and method of providing the same | |
JP6283308B2 (ja) | Image dictionary construction method, image representation method, apparatus, and program | |
Liang et al. | Learning vocabulary-based hashing with adaboost | |
Amato et al. | Aggregating Local Descriptors for Epigraphs Recognition | |
KR20170099633A (ko) | Descriptor encoding apparatus and descriptor encoding method for reducing the data volume of feature point descriptors in image retrieval | |
JP2012203752A (ja) | Similar image retrieval device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12849587 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013544319 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14359063 Country of ref document: US Ref document number: 2012849587 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20147016332 Country of ref document: KR Kind code of ref document: A |