WO2010087124A1 - Feature Quantity Selection Device - Google Patents
Feature Quantity Selection Device
- Publication number
- WO2010087124A1 (PCT/JP2010/000246)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- image
- types
- feature quantity
- images
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
Definitions
- The present invention relates to an apparatus for selecting feature quantities, and more particularly to an apparatus for selecting, from many types of feature quantities, a plurality of feature quantities suitable for an image identifier used to identify an image (determine image identity).
- An image identifier is an image feature quantity for identifying an image (determining identity). By comparing an image identifier extracted from one image with an image identifier extracted from another image, an identity scale (generally, a similarity or distance indicating the degree to which the two images are identical) can be calculated. Further, whether or not two images are the same can be determined by comparing the calculated identity scale with a threshold value.
- Here, "two images are the same" is not limited to the case where the two images are identical at the level of the image signal (the pixel values of the pixels constituting the image); it also includes the case where one image is a duplicate of the other produced by various modification processes, such as conversion of the image compression format, image size/aspect ratio conversion, image tone adjustment, various image filtering (sharpening, smoothing, etc.), local processing of images (telop overlay, clipping, etc.), and image recapturing. If image identifiers are used, a copy of an image, or of a moving image (a collection of images), can be detected; the image identifier can therefore be applied to an illegal copy detection system for images and moving images.
- An image identifier generally consists of a set of multiple feature quantities. If each feature quantity in the set is regarded as one dimension, the image identifier is a multi-dimensional feature vector. In particular, quantization indexes (quantized values), which are discrete values, are often used as the feature quantities. Examples of image identifiers are described in Non-Patent Document 1, Non-Patent Document 2, and Patent Document 1. In the methods described in these documents, a feature quantity is extracted from each of multiple local regions of an image, each extracted feature quantity is quantized to calculate a quantization index, and the vector of the quantization indexes calculated for the local regions is used as the image identifier.
- In Non-Patent Document 1 and Non-Patent Document 2, an image is divided into blocks, and feature quantities (quantization indexes) are extracted using each block as a local region.
- In Non-Patent Document 1, the luminance pattern within a block, classified into 11 types, is used as the quantization index.
- In Non-Patent Document 2 (the technique described as "Local Edge Representation"), the centroid position of the edge points extracted from a block is quantized to obtain the quantization index.
- The problem, then, is how to select feature quantities suitable for an image identifier composed of a set of a plurality of feature quantities, that is, feature quantities that increase the accuracy of image identity determination (optimize performance).
- In the image identifiers described in Non-Patent Document 1, Non-Patent Document 2, and Patent Document 1, each feature quantity is extracted from a local region determined for that feature quantity (the local regions differ from one another). Therefore, in the examples of these documents, the choice of which local region each feature quantity is extracted from affects the performance of the image identifier.
- The decision (selection) of the feature quantities is generally made through empirical knowledge and trial-and-error experiments.
- the local region of each feature amount is a block obtained by regularly dividing an image.
- In the field of pattern recognition, techniques that automatically select feature quantities so as to optimize performance are used, for example:
- Principal Component Analysis (PCA)
- Linear Discriminant Analysis (LDA)
- Unless feature quantities are selected in consideration of both the discrimination ability, which is the degree to which different images can be distinguished, and the robustness, which is the degree to which the value of a feature quantity does not change under various modifications of the image, the performance of the image identifier cannot be optimized (the image identity determination accuracy cannot be optimized).
- The Principal Component Analysis (PCA) method maximizes the information of the entire feature quantity distribution, so it does not consider the robustness of the feature quantities (it cannot select feature quantities in consideration of robustness).
- The Linear Discriminant Analysis (LDA) method is suitable for feature selection in class classification (problems of classifying into a finite number of classes), but not for feature selection for image identifiers, for which no classes are defined in advance (it does not select features in consideration of the discrimination ability and robustness required of image identifiers).
- an object of the present invention is to provide a feature quantity selection device that solves the problem that it is difficult to optimize the performance of image identifiers (image identity determination accuracy).
- A feature quantity selection device according to the present invention comprises: feature quantity extraction means for extracting M types of feature quantities from each of a plurality of original images and a plurality of modified images obtained by applying modification processing to those original images; and feature quantity selection means for evaluating the M types of feature quantities extracted from each image (treating an original image, its modified images, and modified images of the same original image as the same image, and other images as different images) in terms of the discrimination ability, which is the degree to which different images can be distinguished, and the robustness, which is the degree to which the value of a feature quantity does not change under image modification processing, and for selecting from the M types a set of N types of feature quantities (N < M) as the feature quantities for identifying an image.
- Since the present invention is configured as described above, it is possible to optimize the performance (image identity determination accuracy) of an image identifier composed of a set of a plurality of feature quantities for identifying an image.
- FIG. 1 is a block diagram of a first embodiment of the present invention. FIG. 2 is a diagram showing the extraction method of a multi-shaped region comparison feature quantity. FIG. 3 is a diagram showing an example of the data stored in the feature quantity storage means.
- The feature quantity selection apparatus selects, from M types of feature quantities, N types (N < M) suitable as an image identifier, using the image group contained in an image database, and outputs information indicating the selected N types of feature quantities.
- a feature quantity suitable as an image identifier refers to a feature quantity that increases the accuracy of image identity determination.
- The set of N types of feature quantities selected by the feature quantity selection apparatus according to the present invention is used as the feature quantities of the dimensions of an N-dimensional feature vector (image identifier).
- As the method of calculating the identity scale of N-dimensional feature vectors, it is assumed that the values of corresponding dimensions (feature quantities) are compared; for example, the number of dimensions whose values (quantization indexes) match is calculated as a similarity, or a Hamming distance, Euclidean distance, cosine similarity (inner product), or the like is calculated.
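As an illustrative sketch (the function names are not from the patent), the two simplest identity scales mentioned above can be computed for two quantization-index vectors as follows:

```python
# Hypothetical sketch of the identity-scale computation described above:
# corresponding dimensions of two N-dimensional quantization-index vectors
# are compared one by one.

def similarity(id_a, id_b):
    """Number of dimensions whose quantization indexes match."""
    assert len(id_a) == len(id_b)
    return sum(1 for a, b in zip(id_a, id_b) if a == b)

def hamming_distance(id_a, id_b):
    """Number of dimensions whose quantization indexes differ."""
    return len(id_a) - similarity(id_a, id_b)

a = [+1, 0, -1, +1, 0]
b = [+1, 0, +1, +1, -1]
print(similarity(a, b))        # 3
print(hamming_distance(a, b))  # 2
```

The two scales are complementary: their sum always equals the number of dimensions N.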
- M and N do not have to be predetermined numerical values (constants), and may be variables whose values change, as long as M > 2 and N < M.
- The feature quantity selection apparatus comprises image modification means 11, feature quantity extraction parameter generation means 12, feature quantity extraction means 13, feature quantity selection means 14, original image storage means 21, modified image storage means 22, feature quantity extraction parameter storage means 23, and feature quantity storage means 24.
- the original image storage means 21 is an image database that stores a large number of original images in association with image IDs such as numbers for uniquely identifying the original images.
- the original image stored in the original image storage unit 21 is used by the feature quantity selection device according to the present embodiment to select a feature quantity suitable for the image identifier.
- the original image stored in the original image storage unit 21 is supplied to the image modification unit 11 and the feature amount extraction unit 13.
- Since the original image group stored in the original image storage means 21 is used to select feature quantities suitable for the image identifier, it desirably contains many original images (for example, 10,000 or more).
- Since the original image group stored in the original image storage means 21 is used to select feature quantities suitable for the image identifier, it is desirable that it have the same tendency as (be similar to) the image group to which the image identifier composed of the feature quantities selected by the feature quantity selection device of this embodiment will be applied. For example, if the image identifier is to be used for images and moving images on the Internet (for example, to detect illegal copies of images and moving images on the Internet), the original image group stored in the original image storage means 21 is preferably an image group obtained by uniformly sampling all images on the Internet.
- Likewise, if the image group to which the image identifier will be applied consists of landscape images, the original image group stored in the original image storage means 21 is desirably an image group obtained by sampling various landscape images; if it consists of painting images, an image group obtained by sampling various painting images. If the target image group is a mixture of various types of images, the original image group stored in the original image storage means 21 is desirably mixed in the same proportions as the target image group.
- When the original image group in the original image storage means 21 has the same tendency as (is similar to) the image group to which the image identifier will be applied, more appropriate feature quantities can be selected for identifying images of that group, so an image identifier with higher image identity determination accuracy can be constructed.
- the image modification unit 11 performs a modification process on the original image supplied from the original image storage unit 21 to generate a modified image.
- the generated modified image is stored in the modified image storage unit 22 in association with the original image of the generation source so that it is clear from which original image the modified image is generated.
- The method for associating an original image with its modified images is arbitrary. For example, the image ID of a modified image may be formed by concatenating, to the image ID assigned to the original image, a branch number that uniquely identifies each of the modified images generated from that original image.
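A minimal sketch of such an ID scheme, assuming a "-" separator between the original-image ID and the branch number (the separator and format are illustrative assumptions, not specified by the patent):

```python
# Hypothetical ID scheme: a modified image's ID is the original image's ID
# with a branch number concatenated, so the source image is recoverable.

def modified_image_id(original_id, branch):
    return f"{original_id}-{branch}"

def original_id_of(modified_id):
    # Strip the branch number to recover the original image's ID.
    return modified_id.rsplit("-", 1)[0]

mid = modified_image_id("000123", 2)
print(mid)                  # 000123-2
print(original_id_of(mid))  # 000123
```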
- The modification processing includes, for example, the following processes. These are merely examples, and the modification processing is not limited to them. The image modification means 11 may also perform combinations of these (for example, (A) + (D)).
- (A) Image compression format conversion
- (B) Image size/aspect ratio conversion
- (C) Image color adjustment / conversion to monochrome
- (D) Various filter processing (sharpening, smoothing, etc.)
- (E) Local processing on images (telop overlay, clipping, etc.)
- (G) Black band addition to the image (black bands are the black margin areas inserted at the top/bottom or left/right of the screen by, for example, 4:3 to 16:9 aspect ratio conversion)
- (H) Image recapturing
- The image modification means 11 may generate a plurality of types of modified images by performing a plurality of types of modification processing on each original image stored in the original image storage means 21 (that is, the number of modified images may be greater than the number of original images).
- The modification processing performed by the image modification means 11 is desirably the same as (or has the same tendency as) the modification processing applied to, or for which robustness is required of, the target to which the image identifier composed of the feature quantities selected by the feature quantity selection device of this embodiment will be applied. For example, if the system using the image identifier requires robustness against (A), (B), (C), and (D) above (or applies the modification processes (A), (B), (C), and (D)), the image modification means 11 desirably performs modification processes (A), (B), (C), and (D). It is also desirable that the image modification means 11 perform each type of modification processing at the same rate as it is performed on the target. When the modification processing performed by the image modification means 11 matches (has a similar tendency to) the modification processing applied to the target or required for robustness, more robust feature quantities can be selected for the image identifier, so an image identifier with higher image identity determination accuracy can be constructed.
- Feature amount extraction parameter generation means 12 generates a feature amount extraction parameter that is a parameter for extracting a feature amount from an image for each of the M types of feature amounts.
- the generated M types of feature quantity extraction parameters are stored in the feature quantity extraction parameter storage unit 23.
- In this embodiment, the feature quantity extraction parameter generation means 12 that generates the M types of feature quantity extraction parameters is provided, but an embodiment that omits it is also conceivable. In that case, M types of feature quantity extraction parameters generated by the same or similar means, or created manually, are stored in advance in the feature quantity extraction parameter storage means 23.
- M may be any number larger than the number N of feature quantities to be selected for the image identifier (N < M), but is preferably several times to several tens of times N.
- N is determined as an appropriate value based on requirements such as the identity determination accuracy of the image identifier, the size of the image identifier, and the matching speed.
- M is preferably, for example, about 2000 to 5000 or more.
- The M types of feature quantities extracted according to the M types of feature quantity extraction parameters may be of any kind, but feature quantities devised so as to be effective for more types of images are desirable. One example will be described with reference to FIG. 2.
- FIG. 2 is a diagram showing a method of extracting one example of such feature quantities (hereinafter referred to as multi-shaped region comparison feature quantities).
- For a multi-shaped region comparison feature quantity, two extraction regions within the image (a first extraction region and a second extraction region) from which the feature quantity is extracted are determined in advance for each dimension of the feature vector.
- A major difference between the multi-shaped region comparison feature quantity and the feature quantity described in Patent Document 1 is that the shapes of the extraction regions are diverse.
- To extract the feature quantity, for each dimension, the average luminance values of the first extraction region and the second extraction region determined for that dimension are calculated, compared (that is, based on their difference value), and quantized into three values (+1, 0, −1) to obtain a quantization index. If the absolute value of the difference between the average luminance value of the first extraction region and that of the second extraction region is less than or equal to a predetermined threshold value, it is deemed that there is no difference between the average luminance values of the two regions, and the quantization index is 0, indicating no difference. Otherwise, the two average luminance values are compared in magnitude: if the average luminance value of the first extraction region is larger, the quantization index is +1; otherwise it is −1.
- Letting Vn1 and Vn2 be the average luminance values of the first and second extraction regions of dimension n, and th be the threshold, the quantization index Qn of dimension n can be calculated as follows: Qn = +1 if |Vn1 − Vn2| > th and Vn1 > Vn2; Qn = 0 if |Vn1 − Vn2| ≤ th; Qn = −1 if |Vn1 − Vn2| > th and Vn1 ≤ Vn2.
- The feature quantity extraction parameter corresponding to a multi-shaped region comparison feature quantity is information indicating the first extraction region and the second extraction region of each feature quantity. For example, the set of pixel coordinate values of the first extraction region and the set of pixel coordinate values of the second extraction region in a normalized image size (for example, 320 × 240 pixels) may be used as the feature quantity extraction parameter. The extraction regions may also be expressed with fewer parameters: if the shape of an extraction region is a rectangle, the coordinate values of its four corners may be used as the feature quantity extraction parameter; if the shape is a circle, the coordinate values of its center and its radius value may be used. In addition, when the threshold value th differs for each type of feature quantity, th may also be included in the feature quantity extraction parameter.
- the feature quantity extraction parameter generation means 12 may automatically generate feature quantity extraction parameters for M types of multi-shaped region comparison feature quantities, for example, using pseudo-random numbers. For example, a random number sequence may be generated from a seed of a pseudo random number, and the shape of the extraction region and the threshold th may be automatically generated based on the generated random number. For example, when the shape of the extraction region is a quadrangle, the coordinate values of the four corners of the extraction region may be automatically generated based on the generated random number.
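A minimal sketch of generating M extraction parameters from a pseudo-random seed, as suggested above. Here each parameter is assumed to be a pair of rectangular extraction regions given by corner coordinates in a normalized 320 × 240 image; this concrete representation is an illustrative assumption:

```python
import random

# Hypothetical sketch: generate m feature-extraction parameters
# reproducibly from a pseudo-random seed. Each parameter is a pair of
# axis-aligned rectangles (x1, y1, x2, y2) inside a 320x240 image.

def generate_parameters(m, seed=0, width=320, height=240):
    rng = random.Random(seed)  # same seed -> same parameter sequence
    def region():
        x1, x2 = sorted(rng.randrange(width) for _ in range(2))
        y1, y2 = sorted(rng.randrange(height) for _ in range(2))
        return (x1, y1, x2, y2)
    return [(region(), region()) for _ in range(m)]

params = generate_parameters(5, seed=42)
print(len(params))  # 5
```

Because the sequence is fully determined by the seed, the same M extraction regions can be regenerated anywhere the seed is known.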
- In this example, the M types of feature quantities all share the same characteristic (comparison of average luminance values), but the M types of feature quantities are not limited to feature quantities with the same characteristic. Feature quantities with different characteristics, such as color information, frequency information, and edge information, may be mixed among the M types. That is, for example, some of the M types of feature quantities may be multi-shaped region comparison feature quantities, some feature quantities based on color information, some feature quantities based on frequency information, and some feature quantities based on edge information, with a feature quantity extraction parameter generated for each.
- The feature quantity extraction means 13 receives the original images stored in the original image storage means 21 and the modified images stored in the modified image storage means 22, and extracts M types of feature quantities from each original image and each modified image according to the M types of feature quantity extraction parameters, stored in the feature quantity extraction parameter storage means 23, that define how each feature quantity is extracted.
- The feature quantity extraction means 13 includes two extraction units: a feature quantity extraction unit 131 that extracts the M types of feature quantities from the original images, and a feature quantity extraction unit 132 that extracts the M types of feature quantities from the modified images. The two units may operate simultaneously in parallel, or one after the other.
- the M types of feature quantities extracted from the original image and the M types of feature quantities extracted from the modified image are stored in the feature quantity storage unit 24.
- the feature amount storage unit 24 stores the M types of feature amounts extracted from the original image and the M types of feature amounts extracted from the modified image of the original image in association with each other.
- The association method may be arbitrary. For example, if the image ID of a modified image is the image ID of its original image with a branch number concatenated, then by assigning the original image's ID to the set of M types of feature quantities extracted from the original image, and the modified image's ID to the set of M types of feature quantities extracted from the modified image, the correspondence between the two sets can be recognized from the image IDs.
- the feature quantity storage unit 24 of this example stores feature quantity groups 24-1 to 24-x corresponding to the original image stored in the original image storage unit 21 on a one-to-one basis.
- One feature quantity group, for example the feature quantity group 24-1, includes original image feature quantity data 241, composed of the image ID of the corresponding original image and the M types of feature quantities extracted from that original image, and modified image feature quantity data 241-1 to 241-y, composed of the image IDs of the modified images of that original image and the M types of feature quantities extracted from those modified images.
- Images belonging to the same feature quantity group (that is, one original image and its modified images) are treated as the same image, and images belonging to different feature quantity groups are treated as different images.
- The feature quantity selection means 14 uses the M types of feature quantity values extracted from the original images and the M types of feature quantity values extracted from the modified images, stored in the feature quantity storage means 24, to select a set of N types of feature quantities from the M types. Hereinafter, the function of the feature quantity selection means 14 will be described in detail.
- The value of feature quantity n extracted from the original image group is represented as a random variable X_n, and the value of feature quantity n extracted from the modified image group as a random variable X′_n.
- In the case of the multi-shaped region comparison feature quantity, each feature quantity can be regarded as a random variable taking one of the values (quantization indexes) in {+1, 0, −1}.
- The discrimination ability of the set S_N = {X_1, X_2, …, X_N} is represented as D(S_N); the larger the value of D(S_N), the greater the discrimination ability.
- Similarly, the robustness of the set with respect to the corresponding set S′_N = {X′_1, X′_2, …, X′_N} of the modified image group is represented as R(S_N, S′_N); the larger the value of R(S_N, S′_N), the greater the robustness.
- The evaluation value of the set is defined as the sum of the discrimination ability and the robustness:
- E(S_N, S′_N) = D(S_N) + R(S_N, S′_N) … [Formula 2]
- The feature quantity selection means 14 selects a set of N types of feature quantities so that the value of E(S_N, S′_N) according to evaluation Formula 2 becomes large. For example, the set of N types of feature quantities that maximizes the value of E(S_N, S′_N) may be selected at once. Alternatively, a set of feature quantities may be formed by sequentially selecting (adding) feature quantities so that the value of E(S_N, S′_N) increases.
- In the following, the set of values that the random variable X_n can take is denoted Ω_n; for the multi-shaped region comparison feature quantity, Ω_n = {+1, 0, −1}.
- The discrimination ability of a set of feature quantities can be considered to increase, for example, as the information entropy of each feature quantity increases.
- The greater the information entropy, the more uniform the appearance probabilities of the values of each feature quantity (random variable X_n), so redundancy decreases and discrimination ability increases.
- Conversely, if the appearance probabilities of the values of a feature quantity (random variable X_n) are biased toward specific values, redundancy increases and the information entropy decreases, so the discrimination ability decreases.
- Therefore, the discrimination ability D(S_N) of the set of feature quantities can be calculated as the sum of the information entropies of the individual feature quantities.
- The information entropy H(X_n) of the random variable X_n of feature quantity n can be calculated by the following equation:
- H(X_n) = −Σ_{x_n∈Ω_n} p(x_n) log p(x_n) … [Formula 3]
- Here, p(x_n) is the appearance probability of each value; for the multi-shaped region comparison feature quantity, the probabilities that the feature value of the original image group is +1, 0, and −1 may be calculated from the appearance frequencies.
- The discrimination ability D(S_N) of the set of feature quantities can then be calculated, for example, as the sum of the information entropies H(X_n) of the feature quantities:
- D(S_N) = Σ_{n=1}^{N} H(X_n) … [Formula 4]
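A minimal sketch of this entropy-based discrimination ability, estimating each H(X_n) from the appearance frequencies of the quantization indexes over the original image group (data layout and names are illustrative):

```python
from collections import Counter
from math import log2

# Sketch: entropy of one feature's quantization indexes, and discrimination
# ability as the sum of entropies over the selected features.

def entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def discrimination_ability(feature_columns):
    """feature_columns[n] lists feature n's value for every original image."""
    return sum(entropy(col) for col in feature_columns)

balanced = [+1, 0, -1, +1, 0, -1]  # uniform over {+1, 0, -1}
skewed = [+1, +1, +1, +1, +1, 0]   # biased toward +1
print(entropy(balanced) > entropy(skewed))  # True: less bias, more entropy
```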
- Alternatively, the discrimination ability of a set of feature quantities can be considered to increase as the correlations between the feature quantities become smaller and their stochastic independence higher, which reduces redundancy. This is because even if feature quantities that are strongly correlated with one another are collected, redundancy increases and the ability to identify images does not grow.
- the mutual information amount can be used as a scale representing the degree of correlation between the feature amounts.
- The mutual information I(X_n; X_k) between the random variable X_n of feature quantity n and the random variable X_k of feature quantity k can be calculated by the following equation:
- I(X_n; X_k) = Σ_{x_n∈Ω_n} Σ_{x_k∈Ω_k} p(x_n, x_k) log [ p(x_n, x_k) / { p(x_n) p(x_k) } ] … [Formula 5]
- Here, the joint probability p(x_n, x_k) can be calculated from the feature values of the supplied original image group.
- The discrimination ability D(S_N) may then be calculated, for example, as the negated sum of the mutual information over all pairs of feature quantities in the set:
- D(S_N) = −Σ_{n=1}^{N} Σ_{k=n+1}^{N} I(X_n; X_k) … [Formula 6]
- The sum of the mutual information is negated (a minus sign is attached) because the smaller the sum of the mutual information, the greater the discrimination ability D(S_N).
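The mutual information between two features can be estimated from joint appearance frequencies over the image group, as in this sketch (variable names are illustrative):

```python
from collections import Counter
from math import log2

# Sketch: mutual information between two features' quantization-index
# sequences; lower values mean less redundancy between the features.

def mutual_information(xs, ys):
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

xs = [+1, +1, 0, 0, -1, -1]
identical = list(xs)                    # fully redundant feature
less_correlated = [0, -1, +1, 0, -1, +1]
print(mutual_information(xs, identical) >
      mutual_information(xs, less_correlated))  # True
```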
- The discrimination ability D(S_N) of the set of feature quantities may also be calculated by taking the sum of Formulas 4 and 6:
- D(S_N) = Σ_{n=1}^{N} H(X_n) − Σ_{n=1}^{N} Σ_{k=n+1}^{N} I(X_n; X_k) … [Formula 7]
- Further, the discrimination ability may be calculated as the joint entropy of the feature quantities:
- D(S_N) = H(X_1, X_2, …, X_N) = −Σ_{x_1∈Ω_1} … Σ_{x_N∈Ω_N} p(x_1, …, x_N) log p(x_1, …, x_N) … [Formula 8]
- However, the calculation of the joint entropy according to Formula 8 is not realistic when N is large, because the amount of computation increases exponentially with the number of feature quantities.
- The calculation methods of the discrimination ability D(S_N) according to Formulas 4, 6, 7, and 8 described here are examples, and the calculation method is not limited to them.
- The discrimination ability may also be calculated not from the feature values of the original image group (random variables X_n) but from those of the modified image group (random variables X′_n), or from a mixture of the two.
- The robustness R(S_N, S′_N) of the set of feature quantities is calculated, for example, from the feature values of the original image group (random variables X_n) and the corresponding feature values of the modified image group (random variables X′_n). For example, it may be calculated based on the probability that the value of each feature quantity matches before and after modification:
- R(S_N, S′_N) = Σ_{n=1}^{N} p(x_n = x′_n) … [Formula 9]
- Alternatively, the robustness may be calculated based on the conditional entropy H(X_n | X′_n), which represents the uncertainty of the feature value X_n of the original image group that remains when the feature value X′_n of the modified image group is known. If the probability that the feature value does not change before and after modification is high (the probability that the feature value matches before and after modification is high), that is, if the robustness is large, the conditional entropy H(X_n | X′_n) is small.
- The conditional entropy H(X_n | X′_n) of feature quantity n can be calculated by the following equation:
- H(X_n | X′_n) = −Σ_{x_n∈Ω_n} Σ_{x′_n∈Ω_n} p(x_n, x′_n) log p(x_n | x′_n) … [Formula 10]
- Here, p(x_n | x′_n) is a conditional probability and can be calculated from the supplied feature values of the original image group and the corresponding feature values of the modified image group. For example, when the feature quantity is the above-described multi-shaped region comparison feature quantity, the conditional probabilities for the combinations of x_n ∈ {+1, 0, −1} and x′_n ∈ {+1, 0, −1}, such as p(x_n = +1 | x′_n = +1), p(x_n = 0 | x′_n = 0), and p(x_n = −1 | x′_n = −1), can be estimated from the appearance frequencies of the paired feature values.
- The robustness R(S_N, S′_N) of the set of feature quantities can then be calculated, for example, as the negated sum of the conditional entropies H(X_n | X′_n) of the feature quantities:
- R(S_N, S′_N) = −Σ_{n=1}^{N} H(X_n | X′_n) … [Formula 11]
- The sum of the conditional entropies H(X_n | X′_n) is negated (a minus sign is attached) because the smaller the sum of the conditional entropies, the greater the robustness R(S_N, S′_N).
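The conditional entropy underlying the robustness measure can be estimated from paired feature values of original images and their modified images, as in this sketch (names are illustrative):

```python
from collections import Counter
from math import log2

# Sketch: conditional entropy H(X | X') from paired quantization indexes of
# original images (xs) and their modified images (xs_mod). A smaller value
# means the feature changes less under modification, i.e. more robustness.

def conditional_entropy(xs, xs_mod):
    n = len(xs)
    pxy = Counter(zip(xs, xs_mod))   # joint frequencies p(x, x')
    py = Counter(xs_mod)             # marginal frequencies p(x')
    return -sum((c / n) * log2((c / n) / (py[b] / n))
                for (a, b), c in pxy.items())

originals = [+1, 0, -1, +1, 0, -1]
unchanged = list(originals)          # perfectly robust feature
scrambled = [0, -1, +1, -1, +1, 0]   # values changed by modification
print(conditional_entropy(originals, unchanged) == 0.0)  # True
print(conditional_entropy(originals, scrambled) > 0.0)   # True
```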
- The evaluation value E(S_N, S′_N), the sum of the discrimination ability and the robustness, may be calculated by combining any of Formulas 4, 6, 7, and 8 as the calculation method of the discrimination ability D(S_N) with Formula 9 or Formula 11 as the calculation method of the robustness R(S_N, S′_N).
- Further, the sum of the discrimination ability D(S_N) and the robustness R(S_N, S′_N) may be calculated using an appropriate weighting coefficient α, as in the following equation:
- E(S_N, S′_N) = α·D(S_N) + (1 − α)·R(S_N, S′_N) … [Formula 12]
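The weighted combination in Formula 12 is a simple convex blend; a one-line sketch (d and r stand for values computed by any of the discrimination-ability and robustness formulas above):

```python
# Sketch of Formula 12: weighted sum of discrimination ability d and
# robustness r with weighting coefficient alpha in [0, 1].

def evaluation_value(d, r, alpha=0.5):
    return alpha * d + (1 - alpha) * r

print(evaluation_value(2.0, -0.5, alpha=0.5))  # 0.75
```

With alpha = 0.5 this reduces to half of the unweighted sum of Formula 2, so the ranking of feature sets is the same in that case.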
- The feature quantity selection means 14 selects a set of N types of feature quantities so that the value of E(S_N, S′_N) according to evaluation Formula 2 (the sum of the discrimination ability and the robustness) becomes large. For example, the set of N types of feature quantities that maximizes E(S_N, S′_N) may be selected at once. However, it is generally difficult to select such a set all at once, because E(S_N, S′_N) would have to be calculated and evaluated for every combination of feature quantities, and the number of combinations becomes enormous.
- A realistic method is therefore an incremental method: the feature quantity that maximizes the value of evaluation Formula 2 is selected sequentially and added to the set.
- That is, at each step, the feature quantity (random variable) X_n whose addition maximizes the value of E(S_N, S′_N) according to evaluation Formula 2 is added. This can be determined by comparing the value of evaluation Formula 2 for the set of feature quantities before the addition with its value for the set after the addition.
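The incremental method described above can be sketched as a greedy loop. Here `evaluate` is a placeholder for the discrimination-ability plus robustness evaluation; the toy scoring function at the end is purely illustrative:

```python
# Greedy sketch of the incremental selection: starting from the empty set,
# repeatedly add the feature whose inclusion maximizes the evaluation value
# E of the enlarged set.

def greedy_select(n, m, evaluate):
    selected = []
    remaining = set(range(m))
    while len(selected) < n:
        # candidate whose addition gives the largest evaluation value
        best = max(remaining, key=lambda f: evaluate(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy evaluation: each feature has a fixed score; E is their sum.
scores = [0.2, 0.9, 0.1, 0.7, 0.5]
chosen = greedy_select(3, 5, lambda s: sum(scores[f] for f in s))
print(sorted(chosen))  # [1, 3, 4]
```

Each step evaluates at most M candidate sets, so the whole selection needs on the order of N·M evaluations instead of examining every combination.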
- The image alteration means 11 applies predetermined types of alteration processing to each original image read from the original image storage means 21, generates altered images, and stores them in the altered image storage means 22 (S101).
- The feature extraction parameter generation means 12 generates, for each of the M feature types, a feature extraction parameter that defines how that feature is extracted from an image, and stores it in the feature extraction parameter storage means 23 (S102).
- The feature extraction unit 131 of the feature extraction means 13 extracts the M types of features from each original image in the original image storage means 21 according to the M feature extraction parameters, and stores the features in the feature storage means 24 (S103).
- The feature extraction unit 132 of the feature extraction means 13 extracts the M types of features from each altered image in the altered image storage means 22 according to the M feature extraction parameters, and stores the features in the feature storage means 24 (S104).
- The feature selection means 14 takes as input the M types of features of the original and altered images stored in the feature storage means 24. Treating each original image and its altered images, as well as altered images derived from the same original image, as identical images, and all other image pairs as different images, it evaluates the features against two criteria: discrimination capability, the degree to which different images can be distinguished, and robustness, the degree to which feature values are unchanged by alteration processing applied to an image. It then selects and outputs N of the M feature types (S105). Below, step S105 is described in detail using the incremental method as an example.
- The feature selection means 14 determines the type of one feature to add (S107). Specifically, it selects the feature type that maximizes the difference between the value of the evaluation formula for the sum of discrimination capability and robustness of the feature set after the addition and that of the set before the addition (that is, the value of Formula 14), and determines the selected type as the next feature to add.
- The feature selection means 14 then determines whether N feature types have been determined (S108). If not, processing returns to step S107 to determine the remaining feature types. If N feature types have been determined, the determined N feature types are output to storage means not shown in FIG. 1 (S109).
- In this way, features can be selected so as to optimize the performance of an image identifier composed of a set of multiple features, that is, to increase the accuracy of determining the identity of images.
- The reason is that features are selected, using the features extracted from the image groups before and after alteration, so that the sum of the discrimination capability (the ability to distinguish different images) and the robustness (the degree to which feature values are unchanged by image alteration processing) becomes larger.
- The feature selection apparatus of the present invention can be realized not only as hardware but also by a computer and a program.
- The program is provided recorded on a computer-readable recording medium such as a magnetic disk or a semiconductor memory, is read by the computer at start-up, and controls the operation of the computer so that the computer functions as the image alteration means 11, the feature extraction parameter generation means 12, the feature extraction means 13, and the feature selection means 14 of the embodiment described above.
Description
An object of the present invention is therefore to provide a feature selection apparatus that solves the problem that it is difficult to optimize the performance of an image identifier (the accuracy of determining image identity).
The feature selection apparatus according to the present embodiment uses a group of images contained in an image database to select, from M types of features, N types (N < M) suited to an image identifier, and outputs information indicating the selected N feature types. Here, "features suited to an image identifier" means features that yield high accuracy in determining image identity. The set of N feature types selected by the feature selection apparatus of the present invention is used as the per-dimension features of an N-dimensional feature vector (image identifier). As the method of matching N-dimensional feature vectors (image identifiers) against each other, that is, of computing an identity measure, a method based on comparing the values of identical (corresponding) dimensions is assumed: for example, computing as the similarity the number of dimensions whose feature values (quantization indexes) agree, or computing the Hamming distance, the Euclidean distance, or the cosine similarity (inner product). M and N need not be predetermined numbers (constants); they may be variables whose values change, as long as they are positive integers satisfying M > 2 and N < M.
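The identity-measure computation based on comparing corresponding dimensions can be illustrated with a short sketch (an assumption-level example, not from the patent text), in which the similarity is the number of dimensions whose quantization indexes agree:

```python
def identity_similarity(id_a, id_b):
    """Similarity between two N-dimensional image identifiers:
    the count of dimensions whose quantization indexes match."""
    assert len(id_a) == len(id_b)  # identifiers must have equal dimension N
    return sum(1 for a, b in zip(id_a, id_b) if a == b)
```

Comparing this similarity against a threshold then yields the same/different decision described above.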
(B) Conversion of image size / aspect ratio
(C) Adjustment of image tone / conversion to monochrome
(D) Various filtering of the image (sharpening, smoothing, etc.)
(E) Local processing of the image (telop (caption) overlay, clipping, etc.)
(F) Geometric transformation of the image, such as rotation, translation, and cropping
(G) Addition of black bars to the image (black bars are, for example, the black margin areas inserted at the top/bottom or left/right of the frame by 4:3 to 16:9 aspect conversion)
(H) Recapturing of the image
0 (if |Vn1-Vn2| ≤ th)
-1 (if |Vn1-Vn2| > th and Vn1 ≤ Vn2)
…[Equation 1]
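The three-valued quantization of Equation 1 can be sketched as follows (a minimal illustration; `v1`, `v2`, and `th` stand for the two region values Vn1, Vn2 and the threshold, and the +1 branch, elided above, is assumed symmetric):

```python
def quantize(v1, v2, th):
    """Ternary quantization of the difference of two region values:
    0 when the difference is within the threshold, otherwise +1/-1
    according to which value is larger."""
    if abs(v1 - v2) <= th:
        return 0
    return 1 if v1 > v2 else -1
```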
SN={X1,X2,…,XN}
S'N={X'1,X'2,…,X'N}
denote the feature sets of the original image group and of the altered image group, respectively.
E(SN,S'N)=D(SN)+R(SN,S'N) …[Equation 2]
(1) As for the discrimination capability of a set of features, one can consider, for example, that the larger the information entropy of each feature, the larger the discrimination capability. The larger the information entropy, the closer to uniform the occurrence probabilities of the values taken by each feature (random variable Xn), so redundancy decreases and discrimination capability increases. Conversely, if the occurrence probabilities of the values taken by each feature are biased toward a particular value, redundancy increases and information entropy decreases, so the discrimination capability becomes small.
H(Xn)=-Σxn∈χn p(xn)log p(xn) …[Equation 3]
D(SN)=Σn=1 N H(Xn) …[Equation 4]
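Equations 3 and 4 can be estimated from observed feature values roughly as follows. This is a sketch under the assumption that probabilities are estimated by relative frequencies; log base 2 is assumed.

```python
import math
from collections import Counter

def entropy(values):
    """Equation 3: information entropy of one feature, with p(xn)
    estimated as the relative frequency of each value."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def discrimination_D(feature_columns):
    """Equation 4: discrimination capability as the sum of per-feature
    entropies; feature_columns[i] lists feature i's values over the images."""
    return sum(entropy(col) for col in feature_columns)
```

A feature that always takes the same value contributes zero entropy, matching the intuition that a constant feature cannot distinguish images.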
I(Xn;Xk)=Σxn∈χn Σxk∈χk p(xn,xk)log[p(xn,xk)/{p(xn)p(xk)}] …[Equation 5]
p(+1,+1)=Pr(Xn=+1,Xk=+1)、p(+1,0)=Pr(Xn=+1,Xk=0)、
p(+1,-1)=Pr(Xn=+1,Xk=-1)、p(0,+1)=Pr(Xn=0,Xk=+1)、
p(0,0)=Pr(Xn=0,Xk=0)、p(0,-1)=Pr(Xn=0,Xk=-1)、
p(-1,+1)=Pr(Xn=-1,Xk=+1)、p(-1,0)=Pr(Xn=-1,Xk=0)、
p(-1,-1)=Pr(Xn=-1,Xk=-1)
and the joint probabilities can be computed from the occurrence frequencies of the combinations of feature n and feature k over the original image group.
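Equation 5 can likewise be estimated from the observed joint frequencies, as in this hedged sketch (relative-frequency estimates of the probabilities, log base 2 assumed):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Equation 5: mutual information I(Xn; Xk) between two feature
    columns, with joint and marginal probabilities estimated from
    occurrence frequencies over the image set."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # frequencies of (xn, xk) pairs
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())
```

Identical columns give mutual information equal to their entropy; independent columns give (approximately) zero, which is why summed pairwise mutual information serves as a redundancy penalty in Equations 6 and 7.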
D(SN)=-Σn=1 N Σk=n+1 N I(Xn;Xk) …[Equation 6]
D(SN)=Σn=1 N H(Xn)-Σn=1 N Σk=n+1 N I(Xn;Xk) …[Equation 7]
D(SN)=H(X1,X2,…,XN)=-Σx1∈χ1,…,xN∈χN p(x1,x2,…,xN)log p(x1,x2,…,xN) …[Equation 8]
(1) The robustness R(SN,S'N) of a set of features can be obtained, for example, by computing, for each feature, the degree to which its value does not change before and after image alteration, and taking the sum. This can be computed by comparing the feature values of the supplied original image group (random variable Xn) with the corresponding feature values of the altered image group (random variable X'n) and measuring the equivalence probability (the probability that the feature values agree, that is, the probability that the value does not change). Here, the equivalence probability of feature n before and after image alteration is written p(xn=x'n). For example, when the feature is the multi-shape region comparison feature described above, p(xn=x'n)=Pr(Xn=+1,X'n=+1)+Pr(Xn=0,X'n=0)+Pr(Xn=-1,X'n=-1). The robustness R(SN,S'N) of the feature set can then be computed, for example, as the sum of the equivalence probabilities p(xn=x'n) of the individual features, by the following equation.
R(SN,S'N)=Σn=1 N p(xn=x'n) …[Equation 9]
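Equation 9 can be estimated by counting, per feature, how often the value survives alteration, as in this sketch (a relative-frequency estimate over paired original/altered images, not the patent's implementation):

```python
def robustness_R(original_cols, altered_cols):
    """Equation 9: sum over features of the equivalence probability
    p(xn = x'n). original_cols[i] and altered_cols[i] list feature i's
    values for corresponding original/altered image pairs."""
    total = 0.0
    for orig, alt in zip(original_cols, altered_cols):
        matches = sum(1 for a, b in zip(orig, alt) if a == b)
        total += matches / len(orig)  # estimated equivalence probability
    return total
```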
H(Xn|X'n)=-Σxn∈χn Σx'n∈χn p(xn,x'n)log p(xn|x'n) …[Equation 10]
p(+1|+1)=Pr(Xn=+1|X'n=+1)、p(+1|0)=Pr(Xn=+1|X'n=0)、
p(+1|-1)=Pr(Xn=+1|X'n=-1)、p(0|+1)=Pr(Xn=0|X'n=+1)、
p(0|0)=Pr(Xn=0|X'n=0)、p(0|-1)=Pr(Xn=0|X'n=-1)、
p(-1|+1)=Pr(Xn=-1|X'n=+1)、p(-1|0)=Pr(Xn=-1|X'n=0)、
p(-1|-1)=Pr(Xn=-1|X'n=-1)
These are the conditional probabilities appearing in Equation 10.
R(SN,S'N)=-Σn=1 N H(Xn|X'n) …[Equation 11]
The sum E(SN,S'N) of discrimination capability and robustness may be computed based on Equation 2 by combining, for example, any one of Equations 4, 6, 7, and 8 as the method of calculating the discrimination capability D(SN) with either Equation 9 or Equation 11 as the method of calculating the robustness R(SN,S'N).
E(SN,S'N)=αD(SN)+(1-α)R(SN,S'N) …[Equation 12]
E(SN,S'N)=D(SN)+R(SN,S'N)
=Σn=1 N H(Xn)-Σn=1 N Σk=n+1 N I(Xn;Xk)-Σn=1 N H(Xn|X'n)
=Σn=1 N I(Xn;X'n)-Σn=1 N Σk=n+1 N I(Xn;Xk) …[Equation 13]
S1={X1}
S2={X1,X2}
S3={X1,X2,X3}
…
SN={X1,X2,…,XN}
In this way, features are added one at a time.
E(SN,S'N)-E(SN-1,S'N-1) …[Equation 14]
The feature (random variable) Xn that maximizes this difference is added.
E(SN,S'N)-E(SN-1,S'N-1)=I(Xn;X'n)-Σk=1 N-1 I(Xn;Xk) …[Equation 15]
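Equation 15 suggests a simple implementation of the incremental step, sketched here under stated assumptions (mutual information estimated by relative frequencies, log base 2 assumed; this is an illustration, not the patent's code):

```python
import math
from collections import Counter

def mi(xs, ys):
    """Frequency-based estimate of mutual information I(X; Y)."""
    n = len(xs)
    joint, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(n * c / (px[a] * py[b]))
               for (a, b), c in joint.items())

def eq15_gain(cand_orig, cand_alt, selected_origs):
    """Equation 15: gain of adding candidate feature Xn is its
    original/altered mutual information I(Xn; X'n), minus the sum of
    its mutual information with the already-selected features Xk."""
    return mi(cand_orig, cand_alt) - sum(mi(cand_orig, s)
                                         for s in selected_origs)
```

At each step of the incremental method, the candidate with the largest `eq15_gain` would be appended to the selected set: a feature that is stable under alteration (large first term) yet non-redundant with those already chosen (small second term).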
12 … Feature extraction parameter generation means
13 … Feature extraction means
131, 132 … Feature extraction units
14 … Feature selection means
21 … Original image storage means
22 … Altered image storage means
23 … Feature extraction parameter storage means
24 … Feature storage means
Claims (25)
1. A feature selection apparatus comprising: feature extraction means for extracting M types of features from each of a plurality of original images and a plurality of altered images obtained by applying alteration processing to the plurality of original images; and feature selection means for evaluating the M types of features extracted from each of the images, treating an original image and its altered images, and altered images of the same original image, as identical images and all other pairs of images as different images, using as evaluation criteria a discrimination capability, which is the degree to which different images can be distinguished, and a robustness, which is the degree to which feature values are not changed by alteration processing applied to an image, and for selecting, from the M types of features, a set of N types of features, N being smaller than M, as features for identifying an image.
2. The feature selection apparatus according to claim 1, wherein the feature extraction means extracts the M types of features according to feature extraction parameters, which are parameters defining the method of extracting the features.
3. The feature selection apparatus according to claim 1 or 2, wherein the feature extraction parameters defining the method of extracting the M types of features are definition information on M types of sub-region pairs, defined so as to include one or more sub-region pairs in which both the combination of the shapes of the two paired sub-regions and the relative positional relationship between the two paired sub-regions, as regions from which features are extracted from an image, differ from those of at least one other sub-region pair.
4. The feature selection apparatus according to any one of claims 1 to 3, wherein the features are quantized values obtained by quantizing physical quantities extracted from the images.
5. The feature selection apparatus according to any one of claims 1 to 4, further comprising: original image storage means for storing the plurality of original images; image alteration means for generating the altered images by applying alteration processing to the original images; and altered image storage means for storing the generated altered images.
6. The feature selection apparatus according to claim 5, wherein the image alteration means performs any one or more of the following alteration processes: conversion of image size, conversion of image aspect ratio, adjustment of image tone, conversion of the image to monochrome, various filtering of the image, local processing of the image, geometric transformation of the image, addition of black bars to the image, and recapturing of the image.
7. The feature selection apparatus according to any one of claims 2 to 6, further comprising: feature extraction parameter generation means for generating the feature extraction parameters defining the method of extracting the M types of features; and feature extraction parameter storage means for storing the generated M types of feature extraction parameters.
8. The feature selection apparatus according to claim 7, wherein the feature extraction parameter generation means generates a pseudo-random number sequence and generates the feature extraction parameters based on the generated random numbers.
9. The feature selection apparatus according to any one of claims 1 to 8, wherein the feature selection means selects a set of N types of features such that the value of an evaluation formula for the sum of the discrimination capability and the robustness becomes large.
10. The feature selection apparatus according to claim 9, wherein the feature selection means computes the discrimination capability of the N types of features, using the features extracted from each of the images, as the sum of the information entropies of the N individual features, as the joint entropy of the set containing the N types of features, or as the sum of the mutual information between pairs of the N types of features.
11. The feature selection apparatus according to claim 9, wherein the feature selection means computes the robustness of the N types of features, using the features extracted from the original images and the features extracted from the altered images, as the sum over the N individual features of the equivalence probability of the feature value before and after alteration, or as the sum over the N individual features of the conditional entropy.
12. The feature selection apparatus according to any one of claims 9 to 11, wherein the feature selection means selects the N types of features by adding features one at a time such that the difference between the value of the evaluation formula for the feature set after the addition and the value of the evaluation formula for the feature set before the addition is maximized.
13. A feature selection method comprising: extracting M types of features from each of a plurality of original images and a plurality of altered images obtained by applying alteration processing to the plurality of original images; and evaluating the M types of features extracted from each of the images, treating an original image and its altered images, and altered images of the same original image, as identical images and all other pairs of images as different images, using as evaluation criteria a discrimination capability, which is the degree to which different images can be distinguished, and a robustness, which is the degree to which feature values are not changed by alteration processing applied to an image, and selecting, from the M types of features, a set of N types of features, N being smaller than M, as features for identifying an image.
14. The feature selection method according to claim 13, wherein, in extracting the M types of features, the M types of features are extracted according to feature extraction parameters, which are parameters defining the method of extracting the features.
15. The feature selection method according to claim 13 or 14, wherein the feature extraction parameters defining the method of extracting the M types of features are definition information on M types of sub-region pairs, defined so as to include one or more sub-region pairs in which both the combination of the shapes of the two paired sub-regions and the relative positional relationship between the two paired sub-regions, as regions from which features are extracted from an image, differ from those of at least one other sub-region pair.
16. The feature selection method according to any one of claims 13 to 15, wherein the features are quantized values obtained by quantizing physical quantities extracted from the images.
17. The feature selection method according to any one of claims 13 to 16, further comprising generating the altered images by applying alteration processing to the original images.
18. The feature selection method according to claim 17, wherein, in generating the altered images, any one or more of the following alteration processes are performed: conversion of image size, conversion of image aspect ratio, adjustment of image tone, conversion of the image to monochrome, various filtering of the image, local processing of the image, geometric transformation of the image, addition of black bars to the image, and recapturing of the image.
19. The feature selection method according to any one of claims 14 to 18, further comprising generating the feature extraction parameters defining the method of extracting the M types of features.
20. The feature selection method according to claim 19, wherein, in generating the feature extraction parameters, a pseudo-random number sequence is generated and the feature extraction parameters are generated based on the generated random numbers.
21. The feature selection method according to any one of claims 13 to 20, wherein, in selecting the features for identifying an image, a set of N types of features is selected such that the value of an evaluation formula for the sum of the discrimination capability and the robustness becomes large.
22. The feature selection method according to claim 21, wherein, in selecting the features for identifying an image, the discrimination capability of the N types of features is computed, using the features extracted from each of the images, as the sum of the information entropies of the N individual features, as the joint entropy of the set containing the N types of features, or as the sum of the mutual information between pairs of the N types of features.
23. The feature selection method according to claim 21, wherein, in selecting the features for identifying an image, the robustness of the N types of features is computed, using the features extracted from the original images and the features extracted from the altered images, as the sum over the N individual features of the equivalence probability of the feature value before and after alteration, or as the sum over the N individual features of the conditional entropy.
24. The feature selection method according to any one of claims 21 to 23, wherein, in selecting the features for identifying an image, the N types of features are selected by adding features one at a time such that the difference between the value of the evaluation formula for the feature set after the addition and the value of the evaluation formula for the feature set before the addition is maximized.
25. A program for causing a computer to function as: feature extraction means for extracting M types of features from each of a plurality of original images and a plurality of altered images obtained by applying alteration processing to the plurality of original images; and feature selection means for evaluating the M types of features extracted from each of the images, treating an original image and its altered images, and altered images of the same original image, as identical images and all other pairs of images as different images, using as evaluation criteria a discrimination capability, which is the degree to which different images can be distinguished, and a robustness, which is the degree to which feature values are not changed by alteration processing applied to an image, and for selecting, from the M types of features, a set of N types of features, N being smaller than M, as features for identifying an image.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/056,767 US8620087B2 (en) | 2009-01-29 | 2010-01-19 | Feature selection device |
KR1020117016483A KR101404401B1 (ko) | 2009-01-29 | 2010-01-19 | 특징량 선택 장치 |
JP2010548397A JP4766197B2 (ja) | 2009-01-29 | 2010-01-19 | 特徴量選択装置 |
CN201080005948.6A CN102301395B (zh) | 2009-01-29 | 2010-01-19 | 特征选择设备 |
EP10735596.8A EP2333718B1 (en) | 2009-01-29 | 2010-01-19 | Feature amount selecting device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-017806 | 2009-01-29 | ||
JP2009017806 | 2009-01-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010087124A1 true WO2010087124A1 (ja) | 2010-08-05 |
Family
ID=42395390
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/000246 WO2010087124A1 (ja) | 2009-01-29 | 2010-01-19 | 特徴量選択装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8620087B2 (ja) |
EP (1) | EP2333718B1 (ja) |
JP (1) | JP4766197B2 (ja) |
KR (1) | KR101404401B1 (ja) |
CN (1) | CN102301395B (ja) |
WO (1) | WO2010087124A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014002723A (ja) * | 2012-06-15 | 2014-01-09 | Mitsubishi Electric Corp | スケール不変の画像特徴の量子化された埋込みを用いて画像を表現する方法 |
WO2017056312A1 (ja) * | 2015-10-02 | 2017-04-06 | 富士通株式会社 | 画像処理プログラムおよび画像処理装置 |
JP2022519868A (ja) * | 2019-03-28 | 2022-03-25 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツング | 敵対的攻撃の自動認識及び分類 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2333718B1 (en) * | 2009-01-29 | 2013-08-28 | Nec Corporation | Feature amount selecting device |
US8744193B2 (en) * | 2009-03-13 | 2014-06-03 | Nec Corporation | Image signature extraction device |
CN108022252A (zh) * | 2012-01-19 | 2018-05-11 | 索尼公司 | 图像处理设备和方法 |
US9053359B2 (en) * | 2012-06-07 | 2015-06-09 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for document authentication using Krawtchouk decomposition of image patches for image comparison |
JP6057786B2 (ja) * | 2013-03-13 | 2017-01-11 | ヤフー株式会社 | 時系列データ解析装置、時系列データ解析方法、およびプログラム |
WO2016022154A1 (en) * | 2014-08-08 | 2016-02-11 | Robotic Vision Technologies, LLC | Detection and tracking of item features |
US9576196B1 (en) | 2014-08-20 | 2017-02-21 | Amazon Technologies, Inc. | Leveraging image context for improved glyph classification |
US9418283B1 (en) * | 2014-08-20 | 2016-08-16 | Amazon Technologies, Inc. | Image processing using multiple aspect ratios |
JP6048688B2 (ja) * | 2014-11-26 | 2016-12-21 | 横河電機株式会社 | イベント解析装置、イベント解析方法およびコンピュータプログラム |
CN104462481A (zh) * | 2014-12-18 | 2015-03-25 | 浪潮(北京)电子信息产业有限公司 | 一种基于颜色和形状的综合图像检索方法 |
CN107045503B (zh) | 2016-02-05 | 2019-03-05 | 华为技术有限公司 | 一种特征集确定的方法及装置 |
KR101986361B1 (ko) * | 2016-09-23 | 2019-06-07 | 주식회사 모션크루 | 디지털 동영상 특징값 추출 시스템 및 방법, 및 상기 특징값을 이용한 디지털 동영상 유사도 판단 시스템 및 방법 |
CN108052500B (zh) * | 2017-12-13 | 2021-06-22 | 北京数洋智慧科技有限公司 | 一种基于语义分析的文本关键信息提取方法及装置 |
CN111860894B (zh) * | 2020-07-29 | 2024-01-09 | 宁波大学 | 斜拉桥病害属性选择方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08500471A (ja) | 1992-04-30 | 1996-01-16 | セリディアン コーポレイション | 放送セグメントを認識するための方法とシステム |
JP2002142094A (ja) * | 2000-10-31 | 2002-05-17 | Toshiba Corp | 電子透かし埋込装置、電子透かし検出装置、電子透かし埋込方法、電子透かし検出方法及び記録媒体 |
WO2006129551A1 (ja) * | 2005-05-31 | 2006-12-07 | Nec Corporation | パタン照合方法、パタン照合システム及びパタン照合プログラム |
JP2008158776A (ja) * | 2006-12-22 | 2008-07-10 | Canon Inc | 特徴検出方法及び装置、プログラム、記憶媒体 |
JP2009017806A (ja) | 2007-07-11 | 2009-01-29 | Ginga Foods Corp | 切れ目入りソーセージ及びその製造方法 |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10243402A (ja) * | 1997-02-27 | 1998-09-11 | Toshiba Corp | 画像処理装置及び画像処理方法 |
JP3435684B2 (ja) * | 1997-09-30 | 2003-08-11 | 株式会社アドバンテスト | 画像情報処理装置 |
US6016354A (en) * | 1997-10-23 | 2000-01-18 | Hewlett-Packard Company | Apparatus and a method for reducing red-eye in a digital image |
DE69943018D1 (de) * | 1998-10-09 | 2011-01-20 | Sony Corp | Lernvorrichtung und -verfahren, erkennungsvorrichtung und verfahren, und aufnahme-medium |
WO2000031688A1 (fr) * | 1998-11-25 | 2000-06-02 | Sony Corporation | Dispositif et procede de traitement d'image, et support enregistre lisible par ordinateur |
JP3550681B2 (ja) * | 1999-12-10 | 2004-08-04 | 日本電気株式会社 | 画像検索装置及び方法、並びに類似画像検索プログラムを格納した記憶媒体 |
US7212677B2 (en) * | 2000-01-11 | 2007-05-01 | Minolta Co., Ltd. | Coder, coding method, program, and image forming apparatus for improving image data compression ratio |
JP3649992B2 (ja) * | 2000-05-12 | 2005-05-18 | 三洋電機株式会社 | 画像信号処理装置及び画像信号処理方法 |
EP1158801A3 (en) * | 2000-05-22 | 2006-09-13 | Matsushita Electric Industrial Co., Ltd. | Image communication terminal |
US6940999B2 (en) * | 2000-06-13 | 2005-09-06 | American Gnc Corp. | Method for target detection and identification by using proximity pixel information |
US6606620B1 (en) * | 2000-07-24 | 2003-08-12 | International Business Machines Corporation | Method and system for classifying semi-structured documents |
US20020138492A1 (en) * | 2001-03-07 | 2002-09-26 | David Kil | Data mining application with improved data mining algorithm selection |
US20020164070A1 (en) * | 2001-03-14 | 2002-11-07 | Kuhner Mark B. | Automatic algorithm generation |
US6996717B2 (en) * | 2001-05-24 | 2006-02-07 | Matsushita Electric Industrial Co., Ltd. | Semi-fragile watermarking system for MPEG video authentication |
WO2003058554A1 (fr) * | 2001-12-28 | 2003-07-17 | Nikon Corporation | Dispositif de traitement d'image permettant d'effectuer un jugement de similitude entre des pixels et programme de traitement d'image |
JP3896868B2 (ja) * | 2002-02-27 | 2007-03-22 | 日本電気株式会社 | パターンの特徴選択方法及び分類方法及び判定方法及びプログラム並びに装置 |
US7194630B2 (en) * | 2002-02-27 | 2007-03-20 | Canon Kabushiki Kaisha | Information processing apparatus, information processing system, information processing method, storage medium and program |
US7366909B2 (en) | 2002-04-29 | 2008-04-29 | The Boeing Company | Dynamic wavelet feature-based watermark |
US7263214B2 (en) * | 2002-05-15 | 2007-08-28 | Ge Medical Systems Global Technology Company Llc | Computer aided diagnosis from multiple energy images |
US7356190B2 (en) * | 2002-07-02 | 2008-04-08 | Canon Kabushiki Kaisha | Image area extraction method, image reconstruction method using the extraction result and apparatus thereof |
US7324927B2 (en) * | 2003-07-03 | 2008-01-29 | Robert Bosch Gmbh | Fast feature selection method and system for maximum entropy modeling |
US7680357B2 (en) * | 2003-09-09 | 2010-03-16 | Fujifilm Corporation | Method and apparatus for detecting positions of center points of circular patterns |
EP1678677A4 (en) * | 2003-09-26 | 2008-02-20 | Agency Science Tech & Res | METHOD AND SYSTEM FOR PROTECTING AND AUTHENTICATING A DIGITAL IMAGE |
US20050276454A1 (en) * | 2004-06-14 | 2005-12-15 | Rodney Beatson | System and methods for transforming biometric image data to a consistent angle of inclination |
US7394940B2 (en) * | 2004-11-19 | 2008-07-01 | International Business Machines Corporation | Digital video media duplication or transmission quality measurement |
JP4728104B2 (ja) * | 2004-11-29 | 2011-07-20 | 株式会社日立製作所 | 電子画像の真正性保証方法および電子データ公開システム |
WO2007015452A1 (ja) * | 2005-08-04 | 2007-02-08 | Nippon Telegraph And Telephone Corporation | 電子透かし埋め込み方法、電子透かし埋め込み装置、電子透かし検出方法、電子透かし検出装置、及びプログラム |
JP4592652B2 (ja) * | 2005-09-09 | 2010-12-01 | 株式会社東芝 | 電子透かし埋め込み装置及び方法、電子透かし検出装置及び方法、並びにプログラム |
US8700403B2 (en) * | 2005-11-03 | 2014-04-15 | Robert Bosch Gmbh | Unified treatment of data-sparseness and data-overfitting in maximum entropy modeling |
ITRM20060213A1 (it) * | 2006-04-13 | 2007-10-14 | Univ Palermo | Metodo di elaborazione di immagini biomediche |
US7848592B2 (en) * | 2006-07-31 | 2010-12-07 | Carestream Health, Inc. | Image fusion for radiation therapy |
US20080279416A1 (en) * | 2007-05-11 | 2008-11-13 | Motorola, Inc. | Print matching method and system using phase correlation |
JP5231839B2 (ja) * | 2008-03-11 | 2013-07-10 | 株式会社東芝 | パターン認識装置及びその方法 |
DE102008016807A1 (de) * | 2008-04-02 | 2009-10-22 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Verfahren und Vorrichtung zur Segmentation einer Läsion |
US8200034B2 (en) * | 2008-04-14 | 2012-06-12 | New Jersey Institute Of Technology | Detecting double JPEG compression in images |
US8358837B2 (en) * | 2008-05-01 | 2013-01-22 | Yahoo! Inc. | Apparatus and methods for detecting adult videos |
US20090296989A1 (en) * | 2008-06-03 | 2009-12-03 | Siemens Corporate Research, Inc. | Method for Automatic Detection and Tracking of Multiple Objects |
JP5294343B2 (ja) * | 2008-06-10 | 2013-09-18 | 国立大学法人東京工業大学 | 画像位置合わせ処理装置、領域拡張処理装置及び画質改善処理装置 |
WO2010044214A1 (ja) * | 2008-10-14 | 2010-04-22 | パナソニック株式会社 | 顔認識装置および顔認識方法 |
US9269154B2 (en) * | 2009-01-13 | 2016-02-23 | Futurewei Technologies, Inc. | Method and system for image processing to classify an object in an image |
EP2333718B1 (en) * | 2009-01-29 | 2013-08-28 | Nec Corporation | Feature amount selecting device |
WO2010122721A1 (ja) * | 2009-04-22 | 2010-10-28 | 日本電気株式会社 | 照合装置、照合方法および照合プログラム |
CN102239687B (zh) * | 2009-10-07 | 2013-08-14 | 松下电器产业株式会社 | 追踪对象选择装置、方法及其电路 |
TW201145992A (en) * | 2010-06-09 | 2011-12-16 | Hon Hai Prec Ind Co Ltd | PTZ camera and method for positioning objects of the PTZ camera |
WO2012154216A1 (en) * | 2011-05-06 | 2012-11-15 | Sti Medical Systems, Llc | Diagnosis support system providing guidance to a user by automated retrieval of similar cancer images with user feedback |
- 2010-01-19 EP EP10735596.8A patent/EP2333718B1/en active Active
- 2010-01-19 CN CN201080005948.6A patent/CN102301395B/zh active Active
- 2010-01-19 US US13/056,767 patent/US8620087B2/en active Active
- 2010-01-19 WO PCT/JP2010/000246 patent/WO2010087124A1/ja active Application Filing
- 2010-01-19 JP JP2010548397A patent/JP4766197B2/ja active Active
- 2010-01-19 KR KR1020117016483A patent/KR101404401B1/ko active IP Right Grant
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08500471A (ja) | 1992-04-30 | 1996-01-16 | セリディアン コーポレイション | 放送セグメントを認識するための方法とシステム |
JP2002142094A (ja) * | 2000-10-31 | 2002-05-17 | Toshiba Corp | 電子透かし埋込装置、電子透かし検出装置、電子透かし埋込方法、電子透かし検出方法及び記録媒体 |
WO2006129551A1 (ja) * | 2005-05-31 | 2006-12-07 | Nec Corporation | パタン照合方法、パタン照合システム及びパタン照合プログラム |
JP2008158776A (ja) * | 2006-12-22 | 2008-07-10 | Canon Inc | 特徴検出方法及び装置、プログラム、記憶媒体 |
JP2009017806A (ja) | 2007-07-11 | 2009-01-29 | Ginga Foods Corp | 切れ目入りソーセージ及びその製造方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2333718A4 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014002723A (ja) * | 2012-06-15 | 2014-01-09 | Mitsubishi Electric Corp | スケール不変の画像特徴の量子化された埋込みを用いて画像を表現する方法 |
WO2017056312A1 (ja) * | 2015-10-02 | 2017-04-06 | 富士通株式会社 | 画像処理プログラムおよび画像処理装置 |
JPWO2017056312A1 (ja) * | 2015-10-02 | 2018-02-01 | 富士通株式会社 | 画像処理プログラムおよび画像処理装置 |
US10339418B2 (en) | 2015-10-02 | 2019-07-02 | Fujitsu Limited | Computer-readable storage medium storing image processing program and image processing apparatus |
JP2022519868A (ja) * | 2019-03-28 | 2022-03-25 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツング | 敵対的攻撃の自動認識及び分類 |
JP7248807B2 (ja) | 2019-03-28 | 2023-03-29 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツング | 敵対的攻撃の自動認識及び分類 |
Also Published As
Publication number | Publication date |
---|---|
EP2333718B1 (en) | 2013-08-28 |
EP2333718A1 (en) | 2011-06-15 |
KR101404401B1 (ko) | 2014-06-05 |
US20110135203A1 (en) | 2011-06-09 |
EP2333718A4 (en) | 2011-11-09 |
KR20110103423A (ko) | 2011-09-20 |
JP4766197B2 (ja) | 2011-09-07 |
CN102301395B (zh) | 2014-08-06 |
US8620087B2 (en) | 2013-12-31 |
CN102301395A (zh) | 2011-12-28 |
JPWO2010087124A1 (ja) | 2012-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4766197B2 (ja) | 特徴量選択装置 | |
Wan et al. | Transductive zero-shot learning with visual structure constraint | |
Özkan et al. | Performance analysis of state-of-the-art representation methods for geographical image retrieval and categorization | |
Shuai et al. | Fingerprint indexing based on composite set of reduced SIFT features | |
Ouyang et al. | Robust hashing for image authentication using SIFT feature and quaternion Zernike moments | |
WO2012124000A1 (ja) | 画像認識システム、画像認識方法および画像認識用プログラムが格納された非一時的なコンピュータ可読媒体 | |
CN101853486A (zh) | 一种基于局部数字指纹的图像拷贝检测方法 | |
Varna et al. | Modeling and analysis of correlated binary fingerprints for content identification | |
Liang et al. | Robust image hashing with isomap and saliency map for copy detection | |
Roy et al. | Digital image forensics | |
US8170341B2 (en) | Image signature extraction device | |
Cui et al. | Robust shoeprint retrieval method based on local‐to‐global feature matching for real crime scenes | |
Wang et al. | Attention-based deep metric learning for near-duplicate video retrieval | |
CN109447173A (zh) | 一种基于图像全局特征和局部特征的图像匹配方法 | |
Li et al. | SIFT keypoint removal via directed graph construction for color images | |
Panzade et al. | Copy-move forgery detection by using HSV preprocessing and keypoint extraction | |
JP5833499B2 (ja) | 高次元の特徴ベクトル集合で表現されるコンテンツを高精度で検索する検索装置及びプログラム | |
CN100535926C (zh) | 数据处理,图像处理和图像分类方法及设备 | |
Amiri et al. | Copy-move forgery detection using a bat algorithm with mutation | |
Deshpande et al. | Latent fingerprint identification system based on a local combination of minutiae feature points | |
Weng et al. | Supervised multi-scale locality sensitive hashing | |
Law et al. | Hybrid pooling fusion in the bow pipeline | |
Khedher et al. | Local sparse representation based interest point matching for person re-identification | |
Voloshynovskiy et al. | On accuracy, robustness, and security of bag-of-word search systems | |
Wen et al. | Classification of firing pin impressions using HOG‐SVM |
Legal Events

Code | Title | Description
---|---|---
WWE | WIPO information: entry into national phase | Ref document number: 201080005948.6; Country: CN
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 10735596; Country: EP; Kind code: A1
WWE | WIPO information: entry into national phase | Ref document number: 2010548397; Country: JP
WWE | WIPO information: entry into national phase | Ref document number: 13056767; Country: US
WWE | WIPO information: entry into national phase | Ref document number: 2010735596; Country: EP
ENP | Entry into the national phase | Ref document number: 20117016483; Country: KR; Kind code: A
NENP | Non-entry into the national phase | Country: DE