WO2014061221A1 - Image partial region extraction device, image partial region extraction method, and image partial region extraction program - Google Patents
Image partial region extraction device, image partial region extraction method, and image partial region extraction program
- Publication number
- WO2014061221A1 (PCT/JP2013/005877; JP2013005877W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature point
- feature
- search
- registered image
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Definitions
- the present invention relates to an image partial region extraction apparatus, an image partial region extraction method, and an image partial region extraction program that extract a partial region of an image corresponding to a search target image from a registered image.
- Patent Document 1 describes a partial image search system that detects a similar partial image from an accumulated image group.
- the partial image search system described in Patent Document 1 includes an accumulated feature extraction unit, an accumulated partial region feature extraction unit, an index assigning unit, a target feature extraction unit, a target partial region feature extraction unit, an index search unit, a feature matching unit, a matching result determination unit, a feature re-matching unit, a matching result redetermination unit, a search result candidate selection unit, a next target partial region feature selection unit, and a next search candidate feature selection unit.
- the partial image search system described in Patent Document 1 performs the following operations from Step S1 to Step S15.
- in Step S1, the accumulated feature extraction unit reads an accumulated image.
- in Step S2, the accumulated feature extraction unit extracts an accumulated feature, which is a feature of the image, from each image of the input accumulated image group.
- in Step S3, the accumulated partial region feature extraction unit reads the set of accumulated features and sets a first window of interest of a predetermined size at a certain location in the accumulated image. Then, the accumulated partial region feature extraction unit extracts the image characteristics (accumulated partial region features) of the partial region included in the first window of interest. A plurality of first windows of interest are set at predetermined intervals in each accumulated image, and accumulated partial region features are extracted from the partial region in each window.
- in Step S4, the index assigning unit reads the set of accumulated partial region features and clusters them. Then, the index assigning unit outputs, as an index, the vector quantization code word corresponding to each partial region feature.
- the target feature extraction unit reads the target image.
- the size of the target image is set to be equal to or greater than the sum of the margin and the size of the first window of interest.
- the target feature extraction unit performs feature extraction from the target image by the same method as the above-described accumulated feature extraction unit, and outputs a set of features (target feature) extracted from the target image.
- the target partial region feature extraction unit reads the set of target features extracted by the target feature extraction unit, and sets, at one location of the target image, the same kind of window of interest (i.e., a first window of interest) as that used by the accumulated partial region feature extraction unit. The image features (target partial region features) included in the first window of interest are then extracted.
- the target partial region feature extraction unit extracts the image features using the same extraction method as the accumulated partial region feature extraction unit, and outputs the extraction results as a set of target partial region features.
- the target partial region feature extraction unit either sets the first windows of interest so that they cover the target image without overlaps or gaps, or extracts the target partial region features at each position while shifting the window by one pixel.
- the index search unit reads the index output from the index assigning unit and the set of target partial region features output from the target partial region feature extraction unit, and uses the read index to extract, as search candidate features, accumulated partial region features similar to the set of target partial region features.
- in Step S9, the feature matching unit reads the set of target partial region features and the set of search candidate features and matches the two sets. At this time, the feature matching unit outputs a distance value d(·) as the result of the matching calculation (the calculation of the distance between each search candidate feature and target partial region feature).
- in Step S10, the collation result determination unit reads the distance value d(·) and determines whether there is a possibility that a partial image similar to the target image exists at the current collation location.
- when such a possibility exists, the collation result determination unit outputs the accumulated partial region feature as the collation location.
- otherwise, the process of step S14 described below is performed.
- in Step S11, the feature re-collation unit reads the set of accumulated features, the set of target features, and the collation location, and sets, in the read accumulated features, a second window of interest having the same size (height × width) as the target image. Then, the feature re-collation unit outputs the distance value d between the accumulated features in the second window of interest and the target features.
- in Step S12, the collation result re-determination unit reads the distance value d and determines whether there is a possibility that a partial image similar to the target image exists at the current collation location.
- when such a possibility exists, the collation result re-determination unit treats the collation location in the accumulated image as a detection location and outputs the position of the detection location and the determination result.
- otherwise, the process of step S14 described below is performed.
- the search result candidate selection unit outputs a search result candidate from the accumulated image.
- in Step S14, the next target partial region feature selection unit detects whether there is a target partial region feature to be collated next. When it detects that there is none, the process of step S15 described below is performed. On the other hand, when it detects that there is one, the target partial region feature to be collated next is selected, and the processes from step S9 onward are performed.
- in Step S15, the next search candidate feature selection unit reads the set of search candidate features and selects the search candidate feature to be collated next.
- the next search candidate feature selection unit terminates the process when there is no search candidate feature to be collated; when there is one, it selects the search candidate feature to be collated next, and the processes from step S9 onward are then performed.
- Patent Document 2 describes a method of extracting feature points from an image.
- Patent Document 3 describes that the area ratio of two connected regions is calculated as an invariant with respect to affine transformation.
- Patent Document 4 describes a mixed media document system that forms a mixed media document including at least two types of media (for example, paper printed as a first medium and digital content as a second medium).
- the system described in Patent Document 4 includes a content-based search database constructed with an index table, and searches for content using a text-based index.
- in the system described in Patent Document 4, the two-dimensional geometric positional relationship between objects extracted from a printed document is stored in an index table, and document candidates are calculated from the index table based on the given data.
- Patent Document 5 describes an image search device that searches for an image including a region having characteristics similar to those of a search key image.
- the image search device described in Patent Document 5 extracts feature amounts for a plurality of regions in a registered image, compares the extracted feature amounts with the feature amounts extracted from a search key image, and searches for a similar image including a region similar to the search key image.
- Patent Document 6 describes a method for retrieving, from a database, a document/image corresponding to a captured digital image by comparing a feature amount calculated from the feature points of the captured digital image with the feature amounts obtained from the feature points of the documents/images registered in the database.
- Patent Document 1: JP 2005-352990 A (paragraphs 0015 to 0061)
- Patent Document 2: WO 2010/053109
- Patent Document 3: WO 2009/110410
- Patent Document 4: JP 2009-506394 A
- Patent Document 5: JP 2002-245048 A
- Patent Document 6: WO 2006/092957
- if the resolution and shooting angle of the image to be searched do not match those of the registered image, there is a problem that the search cannot be performed properly.
- further, when the method described in Patent Document 6 is used, a document or image registered in the database can be searched for, but a partial region of the document or image cannot be detected.
- the object of the present invention is therefore to provide an image partial region extraction device, an image partial region extraction method, and an image partial region extraction program that can extract a partial region corresponding to a search image from a registered image even if the search image has a resolution or shooting angle different from that of the registered image.
- An image partial region extraction device according to the present invention is an image partial region extraction device that extracts a partial region corresponding to a search image from a registered image, and includes: a feature amount storage unit that stores a registered image feature amount, which is a feature amount invariant to geometric transformation calculated, for each feature point of interest of the registered image, based on a feature point arrangement that is a set of feature points existing in the vicinity of the feature point of interest and ordered based on a predetermined rule; a corresponding feature point detection unit that compares the degree of coincidence between the registered image feature amount and a search image feature amount, which is a feature amount invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image, thereby detecting corresponding feature points, which are feature points included in a feature point arrangement in the registered image that correspond to feature points included in a feature point arrangement in the search image; and a corresponding region determination unit that extracts a partial region in the registered image specified by those corresponding feature points having a higher degree of coincidence with the feature points in the search image.
- An image partial region extraction method according to the present invention is an image partial region extraction method for extracting a partial region corresponding to a search image from a registered image, and includes: comparing the degree of coincidence between a registered image feature amount, which is a feature amount invariant to geometric transformation calculated, for each feature point of interest of the registered image, based on a feature point arrangement that is a set of feature points existing in the vicinity of the feature point of interest and ordered based on a predetermined rule, and a search image feature amount, which is a feature amount invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image; detecting corresponding feature points, which are feature points included in a feature point arrangement in the registered image that correspond to feature points included in a feature point arrangement in the search image; and extracting a partial region in the registered image specified by those corresponding feature points having a higher degree of coincidence with the feature points in the search image.
- An image partial region extraction program according to the present invention is an image partial region extraction program applied to a computer that extracts a partial region corresponding to a search image from a registered image, and causes the computer to execute: a corresponding feature point detection process of comparing the degree of coincidence between a registered image feature amount, which is a feature amount invariant to geometric transformation calculated, for each feature point of interest of the registered image, based on a feature point arrangement that is a set of feature points existing in the vicinity of the feature point of interest and ordered according to a predetermined rule, and a search image feature amount, which is a feature amount invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image, thereby detecting corresponding feature points, which are feature points included in a feature point arrangement in the registered image that correspond to feature points included in a feature point arrangement in the search image; and a corresponding region determination process of extracting a partial region in the registered image specified by those corresponding feature points having a higher degree of coincidence with the feature points in the search image.
- According to the present invention, a partial region corresponding to the search image can be extracted from the registered image even if a search image having a resolution or shooting angle different from that of the registered image is used.
- [Brief description of drawings: a flowchart illustrating an operation example of the information processing apparatus of the third embodiment; a flowchart showing an operation example of determining action information; a block diagram showing an outline of the image partial region extraction device.]
- FIG. 1 is a block diagram showing a configuration example of a first embodiment of an image partial region extraction apparatus according to the present invention.
- the image partial region extraction apparatus of the present embodiment includes a registered image feature point generation unit R201, a registered image feature point arrangement generation unit R202, a registered image feature amount generation unit R203, a search image feature point generation unit Q201, a search image feature point arrangement generation unit Q202, a search image feature amount generation unit Q203, a corresponding feature point detection unit 204, and a corresponding region determination unit 205.
- the registered image feature point generation unit R201 generates a feature point from the registered image. Specifically, the registered image feature point generation unit R201 generates a feature point from the registered image using a known method.
- the registered image feature point generation unit R201 may, for example, extract connected regions from the registered image using a method such as binarization or color space clustering, and extract the barycentric point of each connected region as a feature point.
- here, a connected region is a region in which mutually adjacent pixels, among the pixels determined to belong to the same color, are connected, and is a known concept in the field of image processing. An unconnected region is a region other than a connected region.
- FIG. 2 is an explanatory diagram showing an example of a connected area.
- one connected area 51 is extracted from the image 50 in which the letter “A” is written, and the center of gravity of the connected area 51 is extracted as the feature point 52.
- a set of pixels that form the letter “A” corresponds to a connected region.
- the method by which the registered image feature point generation unit R201 extracts the feature points is not limited to the above method.
- the registered image feature point generation unit R201 may use, for example, the feature point extraction method of the registered image feature point extraction module described in Patent Document 2, or may apply a filter such as a Gaussian filter before extracting the feature points.
- in the following description, a method in which the registered image feature point generation unit R201 extracts the center of gravity of each connected region as a feature point will be described as an example.
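As an illustration of this centroid-based feature point extraction, the following Python sketch binarizes a grayscale image, labels its connected regions, and returns the centroid of each region. The fixed threshold and the use of scipy.ndimage are assumptions for illustration; the patent leaves the binarization or clustering method open.

```python
import numpy as np
from scipy import ndimage

def extract_feature_points(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize, label connected regions, and return one centroid per region."""
    binary = gray < threshold                       # dark pixels form the regions
    labels, num_regions = ndimage.label(binary)     # label each connected region
    centroids = ndimage.center_of_mass(binary, labels, list(range(1, num_regions + 1)))
    return np.asarray(centroids, dtype=float)       # one (row, col) point per region
```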
- the registered image feature point arrangement generation unit R202 generates one or more feature point arrangements from the feature points obtained by the registered image feature point generation unit R201. Specifically, the registered image feature point arrangement generation unit R202 generates a feature point arrangement using a known method.
- the feature point arrangement refers to a set of feature points that exist in the vicinity of each other ordered based on a predetermined rule.
- the registered image feature point arrangement generation unit R202 may generate the feature point arrangement using, for example, the method described in Patent Document 2. Specifically, the registered image feature point arrangement generation unit R202 may generate feature point arrangements according to the following procedure using each feature point extracted from the registered image as a feature point of interest.
- the registered image feature point arrangement generation unit R202 obtains a feature point group existing in the vicinity of a feature point (attention feature point) obtained from the registered image.
- next, the registered image feature point arrangement generation unit R202 selects the feature point closest to the feature point of interest as the first element.
- the registered image feature point arrangement generation unit R202 then assumes a half line starting at the feature point of interest and passing through the first element, and rotates the half line around its end point (the feature point of interest).
- the registered image feature point arrangement generation unit R202 sequentially selects the second and subsequent elements in the order in which the other feature points intersect the rotating half line.
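A minimal sketch of this ordering rule follows, assuming points are given as (x, y) coordinates: the nearest neighbor of the feature point of interest becomes the first element, and the remaining neighbors are ordered by the counter-clockwise angle swept from the half line through that first element. The neighborhood size k is a free parameter not fixed by the text.

```python
import numpy as np

def feature_point_arrangement(points: np.ndarray, target_idx: int, k: int = 6) -> list:
    """Return the indices of the k nearest neighbors of points[target_idx], ordered."""
    target = points[target_idx]
    dists = np.linalg.norm(points - target, axis=1)
    dists[target_idx] = np.inf                   # exclude the point of interest itself
    neighbors = np.argsort(dists)[:k]            # feature point group in the vicinity
    first = neighbors[0]                         # closest feature point: first element
    base = np.arctan2(*(points[first] - target)[::-1])

    def sweep_angle(idx: int) -> float:          # CCW angle from the initial half line
        angle = np.arctan2(*(points[idx] - target)[::-1]) - base
        return angle % (2 * np.pi)

    return [first] + sorted(neighbors[1:], key=sweep_angle)
```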
- the registered image feature value generation unit R203 calculates a feature value for each of the feature point arrangements generated by the registered image feature point arrangement generation unit R202.
- the feature amount calculated here is a feature amount that is invariant to geometric transformation.
- the invariant calculation method described in Patent Document 2 can be used.
- for example, the registered image feature value generation unit R203 may associate at least one feature point permutation for calculating the geometric invariant with the order given in advance to the feature points in the feature point arrangement, and may calculate an invariant from each feature point permutation.
- the registered image feature value generation unit R203 may generate one or more triangles formed by connecting a plurality of feature points, and may use an invariant calculated based on the area of each triangle as the feature value.
- the method for calculating the feature amount based on the area of each triangle described above is referred to as a triangle-based invariant calculation method.
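A minimal sketch of a triangle-based invariant follows. It assumes the invariant used is the ratio of the areas of two triangles formed from the ordered feature points, which is preserved by affine transformation; the exact formula of Patent Document 2 may differ.

```python
import numpy as np

def triangle_area(p: np.ndarray, q: np.ndarray, r: np.ndarray) -> float:
    """Area of triangle pqr via the 2D cross product."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2.0

def triangle_area_ratio(pts: np.ndarray) -> float:
    """Affine-invariant ratio of the areas of two triangles over ordered points."""
    a = triangle_area(pts[0], pts[1], pts[2])
    b = triangle_area(pts[0], pts[1], pts[3])
    return a / b if b != 0 else 0.0
```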
- the method by which the registered image feature value generation unit R203 calculates the feature value is not limited to the invariant calculation method based on the triangle.
- the registered image feature value generation unit R203 may calculate the feature value using a method described in Patent Document 3.
- the registered image feature value generation unit R203 may use, as a feature value, the area ratio of the connected regions that were extracted by the registered image feature point generation unit R201 when extracting the feature points.
- the area ratio of the two connected regions is an invariant with respect to the affine transformation.
- the connected area is represented as a black pixel area.
- when the number of feature points included in the feature point arrangement is n, the sequence a_1, ..., a_n, obtained by arranging in order the values calculated from the ratios of the numbers of black pixels of the corresponding connected regions, is a feature quantity that is invariant to affine transformation. This feature quantity also has pseudo-invariant properties with respect to projective transformation.
- a method for obtaining an invariant (feature value) based on the ratio of the number of black pixels in the connected region is referred to as an invariant calculation method based on the connected region area.
- the case where a binarized registered image is used has been described as an example of the invariant calculation method based on the connected region area, but the registered image used is not limited to a binarized one.
- for example, when color space clustering is used, the registered image feature value generation unit R203 extracts the connected regions using the pixels included in each cluster of the color space, and the invariant (feature value) may be obtained from the ratio of the numbers of pixels in the connected regions.
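The sketch below illustrates the connected-region-area invariant under one concrete reading: each value a_i is taken as the ratio of the pixel counts of consecutive connected regions in the arrangement. The adjacent-ratio form is an assumption for illustration; the patent only fixes that the values derive from pixel-count ratios.

```python
import numpy as np

def region_area_feature(labels: np.ndarray, ordered_region_ids: list) -> np.ndarray:
    """Ratios of pixel counts of consecutive regions in the ordered arrangement."""
    counts = [np.count_nonzero(labels == rid) for rid in ordered_region_ids]
    return np.array([counts[i] / counts[i + 1] for i in range(len(counts) - 1)])
```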
- the feature values described above may be used alone or in combination of two or more feature values.
- in the following description, a feature amount obtained by the triangle-based invariant calculation method and a feature amount obtained by the invariant calculation method based on the connected region area are used in combination, and the combination of both feature amounts is described as the feature amount with respect to the feature point arrangement, or simply as the feature amount.
- the registered image feature value generation unit R203 may include a registered image feature value storage device (not shown) that stores the calculated feature value. In this case, the registered image feature value generation unit R203 stores the calculated feature value in the registered image feature value storage device.
- Search image feature point generation unit Q201 generates a feature point from the search image.
- the search image feature point generation unit Q201 may generate feature points from the search image by a known method, or may generate feature points using the same method as the registered image feature point generation unit R201 uses to generate feature points from the registered image.
- the method by which the search image feature point generation unit Q201 generates feature points is not limited to these methods.
- the search image feature point arrangement generation unit Q202 generates a feature point arrangement.
- for example, the search image feature point arrangement generation unit Q202 may generate the feature point arrangement by a known method, or may generate the feature point arrangement using the same method as the registered image feature point arrangement generation unit R202 uses to generate feature point arrangements from feature points.
- the method by which the search image feature point arrangement generation unit Q202 generates the feature point arrangement is not limited to these methods.
- the search image feature amount generation unit Q203 calculates a feature amount for each of the feature point arrangements generated by the search image feature point arrangement generation unit Q202.
- the search image feature value generation unit Q203 may calculate the feature value using the same method as the registered image feature value generation unit R203 uses to calculate the feature value.
- the method by which the search image feature value generation unit Q203 calculates the feature value is not limited to this method.
- a feature quantity that is invariant to the geometric transformation is calculated for each feature point of interest based on the feature point arrangement.
- the corresponding feature point detection unit 204 compares the feature amounts generated by the registered image feature amount generation unit R203 with the feature amounts generated by the search image feature amount generation unit Q203, thereby determining whether the feature point arrangements from which the feature amounts were generated match.
- the corresponding feature point detection unit 204 uses the result of this match determination to detect which feature points in the registered image correspond to the feature points of the search image (hereinafter referred to as corresponding feature points), that is, which of the feature points extracted from the registered image correspond to which feature points of the search image. For example, this determination is performed for all combinations of the feature amounts generated by the registered image feature amount generation unit R203 and the feature amounts generated by the search image feature amount generation unit Q203. Alternatively, the detection process may be sped up by assigning an appropriate index to each feature point instead of performing the detection process for all combinations of feature amounts.
- the corresponding area determination unit 205 determines where the search image is included in the registered image by using the matching feature point arrangement detected by the corresponding feature point detection unit 204. Specifically, the corresponding area determination unit 205 extracts a partial area in the registered image specified by the corresponding feature point.
- the registered image feature point generation unit R201, the registered image feature point arrangement generation unit R202, the registered image feature value generation unit R203, the search image feature point generation unit Q201, the search image feature point arrangement generation unit Q202, the search image feature value generation unit Q203, the corresponding feature point detection unit 204, and the corresponding region determination unit 205 are realized by a CPU of a computer that operates according to a program (an image partial region extraction program). For example, the program may be stored in a storage unit (not shown) of the image partial region extraction device, and the CPU may read the program and operate as these units according to the program.
- alternatively, these units may each be realized by dedicated hardware.
- in the present embodiment, the configuration in which the image partial region extraction device calculates the feature amounts is illustrated, but the corresponding feature point detection unit 204 may instead be configured to receive the feature amounts of the registered image and the feature amounts of the search image calculated by another device. In this case, the image partial region extraction device only needs to include the corresponding feature point detection unit 204 and the corresponding region determination unit 205.
- the feature amount of the registered image calculated in advance may be stored in a storage unit (not shown) of the image partial region extraction device.
- FIG. 3 is a flowchart illustrating an operation example of the registration process.
- the registration process is a process that is performed prior to the search process, and is a process that generates feature amount data necessary for the search process.
- the registration process includes a registered image feature point generation process (step SR201), a registered image feature point arrangement generation process (step SR202), and a registered image feature amount generation process (step SR203).
- the registered image feature point generation unit R201 extracts feature points from the registered image (step SR201).
- the registered image feature point arrangement generation unit R202 generates one or more feature point arrangements based on the feature points generated by the registered image feature point generation unit R201 (step SR202).
- the registered image feature value generation unit R203 calculates the feature values of each of the one or more feature point arrangements generated by the registered image feature point arrangement generation unit R202 (step SR203).
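Tying the three steps together, a hedged end-to-end sketch of the registration process might look as follows; it composes the illustrative helpers sketched earlier (extract_feature_points, feature_point_arrangement, triangle_area_ratio), whose names are assumptions rather than the patent's API.

```python
def register_image(gray_image):
    """Registration process: steps SR201 (points), SR202 (arrangements), SR203 (features)."""
    points = extract_feature_points(gray_image)                  # step SR201
    arrangements = [feature_point_arrangement(points, i)         # step SR202
                    for i in range(len(points))]
    features = [triangle_area_ratio(points[idx])                 # step SR203
                for idx in arrangements]
    return points, arrangements, features
```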
- FIG. 4 is a flowchart showing an example of the search processing operation according to this embodiment.
- the search process is a process for determining a portion corresponding to the search image from the registered image by calculating the feature value from the search image and comparing the feature value with the feature value calculated from the registered image.
- the search process includes search image feature point generation processing (step SQ201), search image feature point arrangement generation processing (step SQ202), search image feature amount generation processing (step SQ203), corresponding feature point detection processing (step SQ204), and corresponding region determination processing (step SQ205).
- the search image feature point generation unit Q201 extracts feature points from the search image (step SQ201).
- the search image feature point arrangement generation unit Q202 generates one or more feature point arrangements based on the feature points generated by the search image feature point generation unit Q201 (step SQ202).
- the search image feature value generation unit Q203 calculates the feature values of each of the one or more feature point arrangements generated by the search image feature point arrangement generation unit Q202 (step SQ203).
- the corresponding feature point detection unit 204 detects the corresponding feature point (step SQ204). Specifically, the corresponding feature point detection unit 204 detects a feature point in the registered image corresponding to the feature point in the search image.
- the process of detecting the corresponding feature points will be described in detail with reference to FIG.
- FIG. 5 is a flowchart showing an operation example of step SQ204 for detecting the corresponding feature point.
- the process for detecting the corresponding feature points includes feature quantity comparison processing (step SQ2041), feature quantity match determination processing (step SQ2042), and feature point match count processing (step SQ2043). The processing of step SQ204 described below is performed individually for every combination of a feature value generated by the registered image feature value generation unit R203 (hereinafter referred to as a registered image feature value) and a feature value generated by the search image feature value generation unit Q203 (hereinafter referred to as a search image feature value).
- first, the corresponding feature point detection unit 204 compares a combination of a registered image feature quantity and a search image feature quantity.
- for example, the corresponding feature point detection unit 204 may obtain the distance between the registered image feature quantity and the search image feature quantity by a known method such as the squared distance, the city block distance, or the vector inner product (step SQ2041). Alternatively, the corresponding feature point detection unit 204 may obtain the distance between the registered image feature quantity and the search image feature quantity by the following method. It can be said that the closer this distance is, the higher the degree of coincidence of the two feature quantities.
- specifically, the corresponding feature point detection unit 204 calculates the difference for each pair of corresponding elements (for example, vector elements) of the feature amounts to be compared, adds 1 to the distance when the absolute value of the difference is within a predetermined range, and adds 0 when it is not. The corresponding feature point detection unit 204 calculates the distance by repeating this process for all elements. In this case, it can be said that the larger the calculated value, the closer the distance and the higher the degree of coincidence between the two feature amounts.
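A minimal sketch of this counting-style distance follows; the per-element tolerance is a free parameter not fixed by the text.

```python
import numpy as np

def match_count_distance(f1: np.ndarray, f2: np.ndarray, tol: float = 0.1) -> int:
    """Count elements whose absolute difference lies within the tolerance;
    a larger count means a closer distance (higher degree of coincidence)."""
    return int(np.count_nonzero(np.abs(f1 - f2) <= tol))
```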
- note that the corresponding feature point detection unit 204 may calculate the distance between the registered image feature value and the search image feature value by combining a plurality of the above-described methods. For example, when an invariant feature quantity and an area ratio feature quantity are used together, the corresponding feature point detection unit 204 may calculate the distance by a method that computes the squared distance for the invariant feature quantity and takes the element-wise difference for the area ratio feature quantity.
- the corresponding feature point detection unit 204 determines whether or not the registered image feature quantity matches the search image feature quantity (step SQ2042). That is, the corresponding feature point detection unit 204 compares the degree of coincidence between the registered image feature amount and the search image feature amount.
- when it is determined that the registered image feature quantity and the search image feature quantity match ("Match" in step SQ2042), the process of step SQ2043 is performed; when it is determined that they do not match ("Mismatch" in step SQ2042), the process of step SQ2043 is omitted.
- the corresponding feature point detection unit 204 may determine that the feature amounts match when the distance calculated in step SQ2041 is less than or equal to a predetermined threshold value. When the feature amount is composed of a plurality of types of feature amounts, the corresponding feature point detection unit 204 may determine that the feature amounts match as a whole when at least one type, or a predetermined number of types, of feature amounts match, or may determine that they match only when all types of feature amounts match.
- when the feature quantities match, the corresponding feature point detection unit 204 increments the match counts of the feature points (step SQ2043).
- for example, the corresponding feature point detection unit 204 may increment the match count of each feature point in the feature point arrangement from which the feature amount was obtained. Alternatively, the corresponding feature point detection unit 204 may increment the match count only of the feature point of interest used when the feature point arrangement from which the registered image feature amount was obtained was generated. Note that the initial value of the match count of each feature point is set to 0 in the initialization process. In the following description, it is assumed that the match count is incremented for each feature point in the feature point arrangement from which the feature amount was obtained. Since a feature point is shared by a plurality of feature point arrangements, the match count of a feature point may be incremented multiple times.
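The loop below sketches steps SQ2041 to SQ2043 under two assumptions: squared distance is used for the comparison, and the match count is accumulated for every feature point of the registered-image arrangement the feature quantity was computed from.

```python
import numpy as np
from collections import defaultdict

def count_feature_point_matches(registered, searched, threshold: float = 1.0):
    """registered/searched: lists of (feature_vector, feature_point_ids) pairs."""
    match_count = defaultdict(int)      # feature point id -> accumulated match count
    for r_vec, r_point_ids in registered:
        for q_vec, _ in searched:
            if np.sum((r_vec - q_vec) ** 2) <= threshold:   # SQ2041 + SQ2042
                for pid in r_point_ids:                     # SQ2043
                    match_count[pid] += 1
    return match_count
```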
- the corresponding area determination unit 205 determines a corresponding area (step SQ205). Specifically, the corresponding area determination unit 205 determines an area in the registered image corresponding to the search image as the corresponding area.
- the process of determining the corresponding area will be described in detail with reference to FIG.
- FIG. 6 is a flowchart showing an operation example of step SQ205 for determining the corresponding area.
- the process for determining the corresponding area includes a connection target node extraction process (step SQ2051), a feature point connection process (step SQ2052), a connected graph detection process (step SQ2053), and an output area determination process (step SQ2054).
- the corresponding region determination unit 205 extracts, from the registered image, feature points whose match count is equal to or greater than a predetermined number as connection target nodes (step SQ2051).
- here, the node is a term used in graph theory; in the present embodiment, feature points are regarded as nodes.
- the connection target node extracted by the corresponding region determination unit 205 is a feature point having a higher degree of matching with the feature point in the search image among the corresponding feature points. Therefore, it can be said that the corresponding region determination unit 205 extracts a corresponding feature point having a higher degree of matching with the feature point in the search image among the corresponding feature points as a connection target node.
- FIG. 7 is an explanatory diagram showing an example of connection target nodes. In FIG. 7, the black circles indicate connection target nodes, and the white circles indicate non-connection target nodes. A non-connection target node is a feature point (node) that is not determined to be a connection target node.
- the corresponding area determination unit 205 adds an edge between the connection target nodes (step SQ2052).
- the edge is a term used in graph theory and means a line connecting nodes. In the following description, this edge may be referred to as a graph.
- for example, the corresponding area determination unit 205 may add an edge between two connection target nodes when the distance between the nodes is smaller than a predetermined threshold.
- alternatively, the corresponding region determination unit 205 may add an edge between two connection target nodes when the distance between the connected regions including those nodes is smaller than a predetermined threshold.
- here, the distance dist(C1, C2) between the connected region C1 and the connected region C2 is calculated, for example, by the following Expression 1, where p1 and p2 are pixels selected from the respective connected regions: dist(C1, C2) = min ||p1 - p2|| over p1 ∈ C1, p2 ∈ C2 ... (Expression 1)
- FIG. 8 is an explanatory diagram showing an example of edges added between nodes to be connected by the node connection process. The edges illustrated in FIG. 8 are added so as to connect the connection target nodes illustrated in FIG.
- the corresponding area determination unit 205 detects one or more connected graphs from the graph generated in step SQ2052 (step SQ2053).
- here, a connected graph means a set of nodes that are connected to one another by edges, together with those edges. One or more connected graphs can be detected by performing a depth-first search, which is a known method in graph theory. Therefore, the corresponding region determination unit 205 may detect the connected graphs by performing a depth-first search. In the example shown in FIG. 8, two connected graphs are detected.
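A sketch of steps SQ2052 and SQ2053 follows, assuming Expression 1 is the minimum pixel-to-pixel distance between two connected regions and that a plain iterative depth-first search splits the resulting graph into connected components; the threshold and data layout are illustrative.

```python
import numpy as np

def region_distance(c1: np.ndarray, c2: np.ndarray) -> float:
    """Expression 1: minimum of ||p1 - p2|| over pixels p1 in C1 and p2 in C2."""
    diffs = c1[:, None, :] - c2[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def connected_graphs(regions: list, dist_threshold: float) -> list:
    """Add edges between close connection target nodes (SQ2052),
    then detect connected graphs by depth-first search (SQ2053)."""
    n = len(regions)
    adjacency = [[j for j in range(n) if j != i and
                  region_distance(regions[i], regions[j]) < dist_threshold]
                 for i in range(n)]
    seen, graphs = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:                          # iterative depth-first search
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            component.append(v)
            stack.extend(adjacency[v])
        graphs.append(component)
    return graphs
```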
- the corresponding region determination unit 205 determines a region to be output from the connected graph detected in step SQ2053 (step SQ2054).
- here, let the four corner coordinates of the circumscribed rectangle of the connected region including node j (j is an integer) in the connected graph G_k (k is an integer of 1 or more) be (x_min_j^(k), y_min_j^(k)), (x_max_j^(k), y_min_j^(k)), (x_min_j^(k), y_max_j^(k)), and (x_max_j^(k), y_max_j^(k)).
- then, the corresponding area determination unit 205 can determine as the output area the rectangular region having (x_minmin^(k), y_minmin^(k)) and (x_maxmax^(k), y_maxmax^(k)) as diagonal vertices.
- here, the coordinates of these vertices satisfy x_minmin^(k) = min_j x_min_j^(k), y_minmin^(k) = min_j y_min_j^(k), x_maxmax^(k) = max_j x_max_j^(k), and y_maxmax^(k) = max_j y_max_j^(k).
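A minimal sketch of step SQ2054 under these definitions: the output area of connected graph G_k is the rectangle spanned by the minima and maxima of its nodes' circumscribed rectangles.

```python
import numpy as np

def output_region(boxes: np.ndarray) -> tuple:
    """boxes: (n, 4) array of (x_min, y_min, x_max, y_max), one row per node of G_k."""
    x_minmin, y_minmin = boxes[:, 0].min(), boxes[:, 1].min()
    x_maxmax, y_maxmax = boxes[:, 2].max(), boxes[:, 3].max()
    return (x_minmin, y_minmin), (x_maxmax, y_maxmax)   # diagonal vertices
```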
- FIG. 9 is an explanatory diagram showing an example of the output area.
- the dotted rectangles illustrated in FIG. 9 indicate the circumscribed rectangles of the connected regions, and the two output areas illustrated in FIG. 9 are determined from the two connected graphs illustrated in FIG. 8.
- a region circumscribing the connection region is represented by a rectangle that can be specified by the coordinates of the four corners.
- alternatively, the corresponding area determination unit 205 may use as the output area the union of the circumscribed rectangular areas of the connected regions including the nodes belonging to the connected graph, or the union of circular areas of radius r (r is a real number greater than or equal to 0) centered on each node j.
- the corresponding area determination unit 205 may determine an output area using an index indicating the probability of each output area.
- examples of the index indicating the probability of an output area include the area of the output area, the number of connection target nodes included in the output area, the number of feature points included in the output area, the maximum match count of the feature points (or connection target nodes) in the output area, and the sum of the match counts of the feature points (or connection target nodes) in the output area.
- the corresponding area determination unit 205 may determine that the larger these indices are, the more probable the output area is, and may select the more probable output area based on these indices.
- as described above, the corresponding area determination unit 205 specifies the connected regions by specifying the connection target nodes connected by edges, and extracts a region derived from those connected regions as the partial region in the registered image.
- as described above, in the present embodiment, the corresponding feature point detection unit 204 detects the corresponding feature points by comparing the degree of coincidence between the registered image feature amounts and the search image feature amounts, and the corresponding region determination unit 205 extracts the partial region in the registered image specified by those of the detected corresponding feature points that have a higher degree of matching with the feature points in the search image. Therefore, even if a search image having a resolution or shooting angle different from that of the registered image is used, a partial region corresponding to the search image can be extracted from the registered image.
- specifically, when the corresponding feature point detection unit 204 determines that a search image feature quantity matches a registered image feature quantity, it increments the match count of each feature point in the feature point arrangement from which the feature quantity was calculated. The corresponding region determination unit 205 may then set the corresponding feature points whose match count is equal to or greater than a predetermined number as connection target nodes, and extract the partial region in the registered image specified by the connection target nodes.
- that is, in the present embodiment, the feature point arrangements are obtained from the centers of gravity of the connected regions for each of the registered image and the search image, a feature quantity that is invariant to geometric transformation is calculated from the feature points and the arrangement of the connected regions corresponding to the feature points, and the registered image feature quantities and the search image feature quantities are compared. Feature points corresponding to the search image are thereby detected in the registered image, and the information on those feature points is integrated to obtain the output area.
- FIG. 10 is a block diagram showing a configuration example of an information processing system capable of realizing the image partial region extraction device of the present embodiment.
- An information processing system 1 illustrated in FIG. 10 includes an arithmetic device 6 (hereinafter simply referred to as a CPU 6), represented by a CPU for example, and a storage medium 7. The information processing system 1 may further include an input/output interface 8 and a display device 9.
- the CPU 6 controls the overall operation of the information processing system 1 by executing various software programs (computer programs) that implement the various means described above.
- the storage medium 7 stores the various software programs and the data necessary for their execution.
- the input / output interface 8 is used when performing data communication with the outside of the information processing system 1.
- Examples of data to be communicated include, but are not limited to, feature point arrangement data generated outside the information processing system 1 and collation result output data.
- the input / output interface 8 only needs to be able to communicate with at least the CPU 6.
- as the input/output interface 8, for example, a connector for connecting a communication line capable of transmitting external signals, a device for receiving radio signals, or the like is used.
- a part of the signal transmission path inside the information processing system 1 may be used as the input / output interface 8 as it is.
- Another example of the input / output interface 8 is a user interface device such as a display device 9 or a speaker (not shown).
- the display device 9 is a device for displaying the results of the image matching executed by the information processing system 1, for example, a display.
- although FIG. 10 shows the display device 9, it is not necessarily an essential component of the image partial region extraction apparatus.
- FIG. 11 is a block diagram showing a configuration example of the second embodiment of the image partial region extraction device according to the present invention.
- the image partial region extraction apparatus of the present embodiment includes a registered image feature point generation unit R201, a registered image feature point arrangement generation unit R202, a registered image feature amount generation unit R203, a search image feature point generation unit Q201, a search image feature point arrangement generation unit Q202, a search image feature amount generation unit Q203, a corresponding feature point pair detection unit 304, and a corresponding region estimation unit 305.
- the image partial region extraction apparatus of the present embodiment differs from that of the first embodiment in that it includes the corresponding feature point pair detection unit 304 and the corresponding region estimation unit 305 instead of the corresponding feature point detection unit 204 and the corresponding region determination unit 205. In the description of the present embodiment, points different from the first embodiment will be mainly described.
- the corresponding feature point pair detection unit 304 stores, in a storage medium (not shown), combinations (pairs) of feature points in the search image and feature points in the registered image that are determined to match.
- in the following description, such a combination of a feature point in the search image and a feature point in the registered image that are determined to match is referred to as the feature point match history.
- the corresponding region estimation unit 305 estimates geometric transformation parameters (a homography matrix, affine transformation parameters, etc.) from the matching feature point arrangements detected by the corresponding feature point pair detection unit 304. Then, the corresponding region is estimated from the estimated geometric transformation parameters and the size of the search image.
- the registered image feature point generation unit R201, the registered image feature point arrangement generation unit R202, the registered image feature amount generation unit R203, the search image feature point generation unit Q201, the search image feature point arrangement generation unit Q202, the search image feature amount generation unit Q203, the corresponding feature point pair detection unit 304, and the corresponding region estimation unit 305 are realized by a CPU of a computer that operates according to a program (an image partial region extraction program), or may each be realized by dedicated hardware.
- the processing performed by the image partial region extraction apparatus according to the present embodiment is also roughly divided into two processes, a registration process and a search process. Since the registration process is the same as that of the first embodiment, the search process will be described.
- FIG. 12 is a flowchart showing an example of the search processing operation according to this embodiment.
- the search process of this embodiment differs from that of the first embodiment in that the corresponding feature point detection process SQ204 is replaced with the corresponding feature point pair detection process SQ304, and the corresponding area determination process SQ205 is replaced with the corresponding area estimation process SQ305.
- FIG. 13 is a flowchart showing an operation example of step SQ304 for detecting a corresponding feature point pair.
- step SQ304 differs from step SQ204 of the first embodiment in that feature point match history storage processing SQ3043 is added after feature point match count processing SQ2043.
- when it is determined in step SQ2042 that the feature quantities match, the corresponding feature point pair detection unit 304 generates a feature point match history using the feature points of the matched feature quantities (step SQ3043). At this time, the corresponding feature point pair detection unit 304 may determine that the feature points of the feature point arrangements from which the feature amounts were obtained match. The corresponding feature point pair detection unit 304 may then generate a feature point match history in which each feature point included in the feature point arrangement from which the registered image feature amount was obtained is paired with the corresponding feature point included in the feature point arrangement from which the search image feature amount was obtained.
- FIG. 14 is an explanatory diagram showing an example of generating a feature point matching history.
- for example, suppose that a feature point arrangement whose feature points are ordered R1, R2, R3, R4, R5, R6 exists in the registered image, and a feature point arrangement whose feature points are ordered Q1, Q2, Q3, Q4, Q5, Q6 exists in the search image. In this case, the corresponding feature point pair detection unit 304 may generate the feature point match history by detecting R1 and Q1, R2 and Q2, R3 and Q3, R4 and Q4, R5 and Q5, and R6 and Q6 as feature point pairs.
- alternatively, the corresponding feature point pair detection unit 304 may determine that only the feature points of interest used when generating the feature point arrangements from which the feature amounts were obtained match. At this time, the corresponding feature point pair detection unit 304 may generate a feature point match history in which the feature point of interest included in the feature point arrangement from which the registered image feature amount was obtained is paired with the feature point of interest included in the feature point arrangement from which the search image feature amount was obtained.
- in the following description, it is assumed that the corresponding feature point pair detection unit 304 determines that the feature points of the feature point arrangements from which the feature amounts were obtained match, and generates the feature point match history accordingly.
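Under that assumption, generating the feature point match history reduces to pairing the ordered feature point identifiers of the two matched arrangements, as in this sketch:

```python
def feature_point_match_history(reg_arrangement: list, qry_arrangement: list) -> list:
    """Pair ordered feature point ids, e.g. (R1, Q1), (R2, Q2), ..., (R6, Q6)."""
    return list(zip(reg_arrangement, qry_arrangement))
```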
- FIG. 15 is a flowchart showing an operation example of step SQ305 for estimating the corresponding area.
- step SQ305 is different from step SQ205 of the first embodiment in that output region determination processing SQ2054 is replaced with output region estimation processing SQ3054.
- FIG. 16 is a flowchart showing an operation example of step SQ3054 for estimating an output region.
- the corresponding region estimation unit 305 selects a plurality of feature point pairs that satisfy a condition described later from the feature point matching history (step SQ30541).
- for example, when a homography matrix is used as the geometric transformation parameter, the corresponding region estimation unit 305 may select four or more feature point pairs; when affine transformation parameters are used, three or more feature point pairs may be selected.
- in the following description, a homography matrix is adopted as the geometric transformation parameter, and the number of feature point pairs to be selected is four.
- the conditions to be satisfied when selecting feature point pairs are as follows: the feature points selected as feature point pairs belong to the same connected graph; and, if there are five or more feature point pairs belonging to the same connected graph, four pairs are selected at random.
- the homography matrix H is a 3 × 3 matrix that represents the relationship between a position (xr, yr) in the registered image and a position (xq, yq) in the search image; specifically, it satisfies Expression 2 shown below: λ (xr, yr, 1)^T = H (xq, yq, 1)^T ... (Expression 2)
- here, λ is a constant determined according to the values of (xr, yr) and (xq, yq).
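The homography of Expression 2 can be estimated from the four selected feature point pairs by the standard direct linear transform (DLT); the patent does not fix the estimation algorithm, so the following sketch is one conventional choice.

```python
import numpy as np

def estimate_homography(reg_pts: np.ndarray, qry_pts: np.ndarray) -> np.ndarray:
    """Solve Expression 2, lambda*(xr, yr, 1)^T = H (xq, yq, 1)^T, from 4 point pairs."""
    rows = []
    for (xr, yr), (xq, yq) in zip(reg_pts, qry_pts):
        rows.append([xq, yq, 1, 0, 0, 0, -xr * xq, -xr * yq, -xr])
        rows.append([0, 0, 0, xq, yq, 1, -yr * xq, -yr * yq, -yr])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)     # null-space vector of the 8x9 system, up to scale
```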
- Next, the corresponding region estimation unit 305 calculates an evaluation value of the geometric transformation parameters (step SQ30543). For example, the corresponding region estimation unit 305 projects all the feature points of the search image onto the registered image using the homography matrix. Then, the corresponding region estimation unit 305 detects, among the projected feature points, those for which a feature point calculated from the registered image exists within a distance smaller than a predetermined value. The number of detected feature points may be used as the evaluation value.
- alternatively, the corresponding region estimation unit 305 may project all the connected regions obtained from the search image, or their circumscribed rectangular regions, and compare them with the partial images in the registered image existing at the projected positions. At this time, the corresponding region estimation unit 305 may determine whether or not they match by a known method, and may use the number of matching regions as the evaluation value.
- for example, the corresponding region estimation unit 305 may extract feature amounts, calculate their distance, and determine a match when the distance is smaller than a certain value. Alternatively, the corresponding region estimation unit 305 may determine whether or not they match using the normalized correlation. The evaluation value calculated in this way can be said to indicate the certainty of the geometric transformation parameters being used.
- Next, the corresponding region estimation unit 305 determines, based on the past calculation history, whether or not the calculated evaluation value is the maximum among the evaluation values calculated so far (step SQ30544). When the calculated evaluation value exceeds the past maximum value (Yes in step SQ30544), the corresponding region estimation unit 305 updates the maximum evaluation value and holds the value of the homography matrix (step SQ30545). On the other hand, if the calculated evaluation value does not exceed the past maximum value (No in step SQ30544), the process proceeds to step SQ30546.
- the corresponding region estimation unit 305 determines whether or not to end the evaluation value calculation (step SQ30546).
- the corresponding region estimation unit 305 may determine that the evaluation value calculation is to be terminated when the number of evaluation value calculations exceeds a predetermined number.
- the corresponding region estimation unit 305 may determine that the calculation of the evaluation value is to be terminated when the evaluation value exceeds a predetermined value or when the evaluation value is equal to or greater than a predetermined value.
- the method for determining whether or not to end the calculation of the evaluation value is not limited to these methods. It can be said that the evaluation value calculated in this way has the highest probability that the geometric conversion parameter converts the search image into the registered image under the condition for calculating the evaluation value.
- When it is determined in step SQ30546 that the calculation is not to be ended, the processes from step SQ30541 onward are repeated.
- In step SQ30547, the corresponding region estimation unit 305 extracts, as the partial region, the region obtained by projecting the region of the search image onto the registered image based on the calculated homography matrix.
- That is, the corresponding region estimation unit 305 projects the region of the search image into the registered image using the homography matrix with the maximum evaluation value. For example, when the search image is rectangular, its four corner coordinates are projected with the homography matrix, and the quadrangle determined by the four projected points becomes the output region, as sketched below.
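A minimal sketch of this corner projection (again with hypothetical names, assuming numpy and the retained homography H):

```python
import numpy as np

def project_search_region(H, width, height):
    """Project the four corners of a width x height rectangular search image
    with the homography H; the quadrangle they define is the output region
    on the registered image."""
    corners = np.array([[0.0, 0.0, 1.0],
                        [width, 0.0, 1.0],
                        [width, height, 1.0],
                        [0.0, height, 1.0]])
    proj = (H @ corners.T).T
    return proj[:, :2] / proj[:, 2:3]   # four (x, y) vertices of the quadrangle
```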
- As described above, in this embodiment, when the corresponding feature point pair detection unit 304 determines that a search image feature quantity matches a registered image feature quantity, the feature points of the matched feature quantities are recorded as a feature point matching history. The corresponding region estimation unit 305 calculates a geometric transformation parameter (a homography matrix) using this history, and extracts, as the partial region, the region obtained by projecting the region of the search image onto the registered image based on the calculated parameter.
- The output region is thus obtained from the estimated homography matrix and the region of the search image. Therefore, in addition to the effects of the first embodiment, an output region whose size corresponds to the search image can be obtained stably even when many matches are missed in the feature point match determination.
- The description above assumes that the registered image to be collated with the search image is fixed (that is, that there is one registered image); the method can, however, easily be extended to a plurality of registered images.
- For example, the corresponding region determination unit 205 may determine an output region for each registered image and select the partial region determined from the registered image whose output region has the largest index of certainty.
- The configuration and operation of the image partial region extraction apparatus described above are one example of implementation; the configuration and the order of operations can be changed without departing from the principle of the invention. It is also not necessary to perform the registration process and the search process on the same apparatus. For example, apparatus A may perform part of the processing, and apparatus B may receive the output of apparatus A via an input/output interface and perform the subsequent processing.
- The information processing apparatus described below defines, for each partial region in an image (registered image) registered in advance, information representing the information processing to be executed by a target apparatus (hereinafter referred to as action information).
- the received image may be referred to as a search image.
- the target device may be the information processing device itself, or may be another device different from the information processing device.
- FIG. 17 is a block diagram illustrating an example of an information processing apparatus that extracts a partial region and performs various types of information processing.
- the information processing apparatus includes an image collation unit 41, an action information determination unit 42, and an action information execution unit 43. Note that the information processing apparatus may include an intermediate information storage unit 44.
- The intermediate information storage unit 44 stores information in which information for specifying a partial area in the registered image (hereinafter, partial area information) is associated with information representing the information processing to be executed by a target device (action information). Hereinafter, information in which partial area information is associated with action information in this way is referred to as intermediate information.
- For example, when the partial area is rectangular, the four corner coordinates specifying the rectangle are set in the partial area information. Alternatively, the horizontal and vertical block widths obtained when the registered image is equally divided may be used as the partial area information, or the numbers of horizontal and vertical blocks of such an equal division may be used. The horizontal resolution, vertical resolution, number of divided blocks, and the like of the image can also be used; the contents of the partial area information are not limited to these.
- any information can be used as the action information as long as it can identify information processing executed by the target device.
- The action information may describe the content of the information processing itself, such as “execute the recording reservation function”, or may indicate a specific function, such as “display an execution button”. When browser display is intended, a URL to be displayed may be set in the action information. A file name stored in the information processing apparatus (for example, the name of a saved moving image, still image, or audio file) may also be set, as may an operation or execution command of an application inside or outside the information processing apparatus. The action information execution unit 43 described later may select the application to run based on the file extension; alternatively, the application to be executed may be associated with an XML tag, and the action information may explicitly combine the XML tag with the file name.
- A single piece of action information may be associated with a plurality of pieces of partial area information.
- Conditions under which the action information execution unit 43 executes the information processing may also be set. For example, the action information may include a condition that the processing is executed only when the target device is present in a predetermined place (for example, inside a store).
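One possible shape for such intermediate information records (purely illustrative; the field names, the example URL and file name, and the condition string are our assumptions, not part of the specification):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class IntermediateInfo:
    # Partial area information: here, the four corner coordinates of a
    # rectangular area in the registered image as (x1, y1, x2, y2).
    partial_area: Tuple[int, int, int, int]
    # Action information: anything identifying the processing to execute,
    # e.g. a URL to display or a stored file name to open.
    action: str
    # Optional execution condition, e.g. "inside the store's wireless area".
    condition: Optional[str] = None

intermediate_store = [
    IntermediateInfo((0, 0, 400, 300), "http://example.com/guide"),
    IntermediateInfo((0, 300, 400, 600), "promo_video.mp4",
                     condition="in_store_wireless_area"),
]
```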
- the intermediate information storage unit 44 is realized by, for example, a magnetic disk. Further, the information processing apparatus itself may include the intermediate information storage unit 44.
- the image matching unit 41 detects a partial area corresponding to the search image from the registered images. That is, the image collation unit 41 collates the search image with the registered image and detects a partial region in the registered image corresponding to the search image.
- the image matching unit 41 may detect only one partial area or may detect a plurality of partial areas.
- The image matching unit 41 may detect the partial region in the registered image corresponding to the search image using the image partial region extraction apparatus described in the first or second embodiment. That is, the image matching unit 41 may detect corresponding feature points by comparing the degree of matching between the registered image feature quantities and the search image feature quantities, and extract the partial region in the registered image specified by the corresponding feature points that match the feature points in the search image to a higher degree.
- The image matching unit 41 may also detect the partial region in the registered image corresponding to the search image using a method other than those described in the first and second embodiments.
- When a method described in the first or second embodiment is used, however, a partial region corresponding to the search image can be extracted from the registered image even if the search image differs from the registered image in resolution or shooting angle, which is more preferable.
- Suppose the image matching unit 41 uses the partial region extraction method of the first embodiment. The image matching unit 41 then calculates, for each corresponding feature point, the number of times the registered image feature quantity matches the search image feature quantity, and extracts connection target nodes from the registered image.
- The image matching unit 41 generates a connected graph from the connection target nodes. Let the four corner coordinates of the circumscribed rectangle of the connected region containing node j (j an integer) in the k-th connected graph be (xmin_j^(k), ymin_j^(k)), (xmax_j^(k), ymin_j^(k)), (xmin_j^(k), ymax_j^(k)), and (xmax_j^(k), ymax_j^(k)).
- The image matching unit 41 then defines as the output region the rectangle whose diagonal vertices are (min_j xmin_j^(k), min_j ymin_j^(k)) and (max_j xmax_j^(k), max_j ymax_j^(k)), as sketched below.
- Here, Kmax denotes the number of partial regions, and k is an integer satisfying 1 ≤ k ≤ Kmax.
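The computation of that output rectangle can be sketched as follows (hypothetical names; each input is the circumscribed rectangle (xmin, ymin, xmax, ymax) of one connected region in the k-th connected graph):

```python
def output_region(circumscribed_rects):
    """Return the rectangle whose diagonal vertices are the component-wise
    minima and maxima over the circumscribed rectangles of the connected
    regions in one connected graph."""
    xmin = min(r[0] for r in circumscribed_rects)
    ymin = min(r[1] for r in circumscribed_rects)
    xmax = max(r[2] for r in circumscribed_rects)
    ymax = max(r[3] for r in circumscribed_rects)
    return (xmin, ymin), (xmax, ymax)
```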
- the method for extracting the partial area is not limited to the method for extracting the partial area by the image partial area extracting apparatus according to the first embodiment or the second embodiment.
- the image collation unit 41 may automatically designate a partial area by, for example, a known document image layout analysis.
- The action information determination unit 42 uses the information output by the image matching unit 41 as the matching result, together with the intermediate information (that is, the partial area information and action information), to determine what information processing the information processing apparatus performs. Specifically, the action information determination unit 42 selects, from the partial areas specified by the partial area information, the partial area that matches the detected area to the highest degree, and identifies the action information corresponding to that partial area. The processing for identifying the action information is described in detail later.
- the action information execution unit 43 executes the action information specified by the action information determination unit 42. Specifically, the action information execution unit 43 causes the target device to execute the processing content according to the specified action information.
- the image collating unit 41, the action information determining unit 42, and the action information executing unit 43 may be realized by a CPU of a computer that operates according to a program (information processing execution program).
- each of the image collation unit 41, the action information determination unit 42, and the action information execution unit 43 may be realized by dedicated hardware.
- FIG. 18 is a flowchart illustrating an operation example of the information processing apparatus according to the present embodiment.
- the operation of the information processing apparatus according to the present embodiment includes an image matching process (step S41), an action information determination process (step S42), and an action information execution process (step S43).
- the image matching unit 41 detects a partial area in the registered image corresponding to the search image (step S41).
- the image collating unit 41 can use the processing performed by the image partial region extraction device according to the first embodiment or the second embodiment.
- FIG. 19 is a flowchart illustrating an exemplary operation for determining action information.
- the process for determining the action information includes an area match score calculation process (step S421) and a partial area specifying process (step S422) for specifying the partial area for which the maximum area match score is calculated.
- The area match score is the degree of coincidence between a partial area specified by the partial area information of the intermediate information (hereinafter, an intermediate information partial area) and an area detected by the image matching unit 41 (hereinafter, an image collation partial area).
- The action information determination unit 42 calculates the area match score for every combination of a partial area (image collation partial area) input from the image matching unit 41 and a partial area (intermediate information partial area) specified by the intermediate information (step S421).
- Consider the image collation partial area k (1 ≤ k ≤ Kmax, where Kmax is the number of image collation partial areas) and the intermediate information partial area c (1 ≤ c ≤ Cmax, where Cmax is the number of intermediate information partial areas).
- The area match score is defined, for example, by Expression 3 below.
- reg_match(k, c) = (area of the intersection of image collation partial area k and intermediate information partial area c) / (area of the union of image collation partial area k and intermediate information partial area c) … (Expression 3)
- the action information determination unit 42 obtains values of k and c that maximize reg_match (k, c) (step S422).
- the value of reg_match (k, c) calculated for all combinations of k and c may be held in a storage medium (not shown).
- Let ka and ca be the values of k and c that maximize reg_match(k, c). That is, the action information determination unit 42 selects the intermediate information partial area that matches an image collation partial area to the highest degree; a compact sketch follows below.
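A sketch of Expression 3 and the selection of (ka, ca) for axis-aligned rectangles (function and variable names are ours):

```python
def reg_match(a, b):
    """Expression 3: intersection area / union area for two rectangles
    given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def select_best_pair(collation_areas, intermediate_areas):
    """Return (ka, ca): the index pair maximizing the area match score over
    all combinations of image collation and intermediate partial areas."""
    pairs = ((k, c)
             for k in range(len(collation_areas))
             for c in range(len(intermediate_areas)))
    return max(pairs, key=lambda kc: reg_match(collation_areas[kc[0]],
                                               intermediate_areas[kc[1]]))
```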
- the action information execution unit 43 executes information processing represented by the action information corresponding to the intermediate information partial area ca (step S43).
- the action information execution unit 43 performs the following information processing, for example, according to the action information.
- For example, when a URL is set in the action information, the action information execution unit 43 may activate software such as a browser installed inside or outside the information processing apparatus and call up the content at the specified URL.
- When a file name is set, the action information execution unit 43 may start appropriate software, such as a browser or viewer, as necessary and open the set file.
- the action information execution unit 43 can display an image or output a sound using the device.
- For image display processing, sound output processing, and the like, when not only the type of image or sound but also information specifying the processing range (for example, the image display range, or the sound playback start and end positions) is associated, the action information execution unit 43 may perform the information processing according to the specified information.
- When an operation command or execution command is set, the action information execution unit 43 may execute that command.
- the action information execution unit 43 may cause the target device to execute information processing when the set condition is satisfied.
- the image collation unit 41 collates the search image with the registered image and detects a region in the registered image corresponding to the search image.
- the action information determination unit 42 identifies a partial area based on the partial area information of the intermediate information, and selects a partial area having the highest degree of matching with the detected area from among the identified partial areas. Since the intermediate information used here associates the partial area information with the action information, the action information determination unit 42 specifies the action information corresponding to the partial area.
- The action information execution unit 43 then causes the target device to execute the information processing according to the action information. Therefore, when image information indicating a part of the registered image is input, information processing corresponding to the input image information can be executed.
- In the above, the case where the image matching unit 41 outputs a partial area (rectangular area) using the partial area detection method of the image partial region extraction apparatus of the first or second embodiment, and the action information determination unit 42 compares the overlap ratio between the output partial area and the partial areas included in the intermediate information, has been described as an example.
- the image matching unit 41 may output feature points (connection target nodes) instead of outputting the partial areas.
- In this case, the action information determination unit 42 may compare the coordinates of each connection target node with each intermediate information partial area c and count how many connection target nodes fall within each intermediate information partial area c. The action information determination unit 42 may then select the partial area with the largest count.
- The counting method is arbitrary. For example, the count may be incremented by one for each feature point included in the area, or only specified feature points may be counted; a minimal sketch of the former follows below.
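A sketch of the simplest variant, incrementing by one per feature point (names are ours):

```python
def count_nodes_per_area(nodes, areas):
    """For each intermediate information partial area (x1, y1, x2, y2),
    count the connection target nodes (x, y) that fall inside it, and
    return the counts together with the index of the best area."""
    counts = [sum(1 for (x, y) in nodes if x1 <= x <= x2 and y1 <= y <= y2)
              for (x1, y1, x2, y2) in areas]
    best_area = max(range(len(areas)), key=counts.__getitem__)
    return counts, best_area
```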
- When the partial region detection method of the image partial region extraction apparatus of the first or second embodiment is used in this way, at least the processes of steps SQ2052 and SQ2053 in the first embodiment become unnecessary. Further, when the count is incremented by one for each feature point included in the area, the process of determining the corresponding area (the process of step SQ205) is itself unnecessary.
- The case where the registered image to be collated with the search image is fixed (that is, where there is one registered image) has been described as an example, but the method can easily be extended to the case where a plurality of registered images exist.
- For example, the image matching unit 41 may calculate, for each registered image, the maximum degree of coincidence between the registered image feature quantities and the search image feature quantities (for example, the area match score), and output the partial area from the registered image with the largest such maximum.
- an appropriate registered image to be compared with the search image may be determined using another known image recognition method or image search method, and a partial region may be output from the registered image determined to be appropriate.
- In the third embodiment, the method of determining which partial area specified by the intermediate information in the intermediate information storage unit 44 corresponds to a partial area output by the image matching unit 41 has been described. That is, in the third embodiment, the information output by the image matching unit 41 is a partial area, the intermediate information storage unit 44 stores intermediate information specifying partial areas, and the action information determination unit 42 identifies the action information by comparing the degree of coincidence of the two sets of partial areas.
- In a modification, the intermediate information storage unit 44 stores, in the intermediate information, an identifier for specifying a partial area in the registered image. Hereinafter, this identifier is referred to as a partial area ID.
- the image collation unit 41 outputs a partial region ID corresponding to the detected region when a region corresponding to the search image is detected from the registered image.
- It suffices to provide a mechanism by which the image matching unit 41 refers to the partial area IDs included in the intermediate information and determines which partial area ID corresponds to the partial area in the registered image obtained by matching the registered image against the search image.
- The image matching unit 41 may then output that partial area ID. When a plurality of partial area IDs are identified from the detected regions, the image matching unit 41 may calculate the degree of area coincidence with each partial area, for example in the same manner as Expression 3 above, and output the partial area ID of the partial area with the higher degree of matching.
- In this way, the action information determination unit 42 can select the action information corresponding to the partial area ID (more specifically, the action information associated with the partial area information specified by the partial area ID).
- The intermediate information storage unit 44 may also store, for each partial area ID of the intermediate information, at least one of an image and a feature amount, and the image matching unit 41 may collate images using them. That is, the image matching unit 41 may collate the search image against the divided images and output the partial area ID corresponding to the detected region in the registered image, and the action information determination unit 42 may specify the action information corresponding to that partial area ID. Here, each image obtained by dividing the registered image is referred to as a divided image. A divided image is an image used to specify a partial area in the registered image; it may cover the same range as the partial area specified by the partial area ID or a different range.
- For example, the intermediate information storage unit 44 may store, as a divided image, the image contained in the same area as the partial area specified by the partial area ID, or only a part of that image. A specific part can be extracted by, for example, a known document image layout analysis technique. Reducing the image to be registered in this way suppresses the amount of data to be stored.
- the intermediate information storage unit 44 may store an image included in an area obtained by expanding the partial area specified by the partial area ID as a divided image.
- the method of expanding the area is arbitrary, and for example, the partial area may be expanded as a whole.
- the accuracy of matching can be improved by using an image including an area adjacent to the partial area specified by the partial area ID.
- Similarly to the divided images, the intermediate information storage unit 44 may store the feature amounts that the image matching unit 41 uses for collation. That is, it may store the feature amount of the image contained in the same region as the partial area specified by the partial area ID, or the feature amount of an image contained in a region larger or smaller than that partial area.
- the feature amount in this case may be calculated in the same manner as the feature amount used by the image matching unit 41 for matching.
- The image matching unit 41 may then collate the stored feature amounts with the feature amounts of the search image and output the partial area ID corresponding to the detected region in the registered image, and the action information determination unit 42 may specify the action information corresponding to that partial area ID.
- An intermediate information generation unit may determine a divided image or feature amount while referring to the image of each partial area included in the intermediate information, attach a partial area ID to the determined divided image or feature amount, and automatically store it in the intermediate information storage unit 44.
- Alternatively, the unit of image or feature amount that the image matching unit 41 collates may be determined in advance, and that unit image or feature amount may be associated with a partial area of the intermediate information.
- For example, one large registered image may be divided into a plurality of small registered images corresponding to partial areas and stored in the intermediate information storage unit 44, and the image matching unit 41 may collate images using each divided image or the feature amounts calculated from it. Note that with the method of the first or second embodiment, a partial region corresponding to the search image can be extracted from the registered image even when the search image differs from the registered image in resolution or shooting angle, so it is more preferable to specify the position from the entire registered image.
- The intermediate information storage unit 44 may store the partial area ID explicitly in the intermediate information, or the storage order of the intermediate information or of the files stored as intermediate information may serve implicitly as the partial area ID. The same applies to the partial area ID attached to an image or feature amount stored as intermediate information.
- As a concrete example, suppose the intermediate information storage unit 44 stores at least one of: action information for executing a recording reservation, action information for executing a video-on-demand viewing request, and action information for executing a video content purchase process.
- the image collating unit 41 specifies a partial area (that is, a target program) in the registered image.
- the action information determination unit 42 compares the time measured by the timer in the information processing apparatus with the broadcast time of the target program in the specified partial area.
- When the measured time falls within the broadcast time of the target program, the action information determination unit 42 may determine to display an operation screen for changing the channel. When the broadcast time of the target program is later than the measured time, the action information determination unit 42 may determine to display an operation screen for reserving recording of the target program, an operation screen for making a video-on-demand viewing request, or an operation screen for purchasing or inquiring about the video content of the target program.
- In this case, the action information includes the comparison between the clock time and the broadcast time, and the content of the information processing based on the comparison result.
- the action information execution unit 43 executes information processing according to the action information determined by the action information determination unit 42.
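For illustration only, the branching just described could look like the sketch below (the screen identifiers are invented, and mapping past programs to no action is our assumption; the specification leaves the exact actions to the action information):

```python
from datetime import datetime

def decide_program_action(broadcast_start, broadcast_end, now=None):
    """Compare the timer's current time with the target program's broadcast
    time and pick the operation screen(s) to display."""
    now = now or datetime.now()
    if broadcast_start <= now <= broadcast_end:
        # the program is on the air: offer to change the channel
        return ["channel_change_screen"]
    if now < broadcast_start:
        # the broadcast time is later than the measured time
        return ["recording_reservation_screen",
                "vod_viewing_request_screen",
                "content_purchase_or_inquiry_screen"]
    return []   # past programs: not covered by the text above
```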
- the image matching unit 41 identifies a partial area (that is, a target article) in the registered image.
- When the action information determination unit 42 determines that the partial area contains the target article, it specifies the action information corresponding to the partial area information representing that partial area.
- For example, when read-aloud data exists for the article, the action information determination unit 42 may determine to display a screen informing the user that read-aloud data is available.
- In this case, the action information execution unit 43 displays the screen, and when the user instructs execution of the displayed action (reading aloud), starts playing the audio that reads out the article.
- Similarly, when a moving image exists for the article, the action information determination unit 42 may determine to display a screen informing the user of the moving image; in this case, the action information execution unit 43 reproduces the moving image.
- a link to a shopping site may be set in the action information.
- In that case, the action information determination unit 42 may determine to display a screen showing the link to the shopping site, and the action information execution unit 43 may activate the browser and display the shopping site.
- information processing for browsing the contents of a book can be set as action information.
- Assume that the terminal device is connected within a wireless area in a store or facility, and that the action information is set so that the processing according to the action information can be executed only inside that wireless area.
- the image collation unit 41 identifies a partial area in the registered image.
- When the action information determination unit 42 determines that the partial area includes part of a book's front cover or spine, it specifies the action information corresponding to the partial area information representing that partial area.
- For example, the action information determination unit 42 may determine that a book is identified from the image and that the contents of the book are to be displayed.
- the action information execution unit 43 may display the contents of the book only when the terminal is in the wireless area.
- the action information execution unit 43 may be able to display the contents of the book even when the terminal leaves the wireless area.
- the image collating unit 41 specifies a partial area (that is, sightseeing spot information) in the registered image.
- When the action information determination unit 42 determines that the partial area contains the sightseeing spot information of interest, it specifies the action information corresponding to the partial area information representing that partial area.
- the action information determination unit 42 may determine to present the sightseeing spot information.
- the action information execution unit 43 performs processing for displaying the registered sightseeing spot information on the screen and reproducing the moving image data.
- FIG. 20 is a block diagram showing an outline of the image partial region extraction apparatus of the present invention.
- The image partial region extraction apparatus of the present invention extracts a partial region corresponding to a search image from a registered image. It includes a feature amount storage unit 81 that stores, for each feature point of interest in the registered image, a registered image feature quantity that is invariant to geometric transformation and is calculated based on a feature point arrangement, that is, a set of feature points in the vicinity of the feature point of interest ordered according to a predetermined rule; a corresponding feature point detection unit 82 (for example, the corresponding feature point detection unit 204) that detects corresponding feature points, which are feature points included in the feature point arrangement in the registered image corresponding to feature points included in the feature point arrangement in the search image, by comparing the degree of coincidence between the search image feature quantities and the registered image feature quantities; and a corresponding region determination unit 83 (for example, the corresponding region determination unit 205) that extracts the partial region in the registered image specified by the corresponding feature points having a higher degree of matching (for example, the highest degree of matching) with the feature points in the search image.
- With this configuration, a partial region corresponding to the search image can be extracted from the registered image even if the search image differs from the registered image in resolution or shooting angle.
- When it is determined that a search image feature quantity matches a registered image feature quantity, the corresponding feature point detection unit 82 may calculate, for each corresponding feature point, the number of times it matches a feature point in the feature point arrangement from which the search image feature quantity was calculated. The corresponding region determination unit 83 may then extract the partial region in the registered image specified by the connection target nodes, which are the corresponding feature points whose number of matches is at least a predetermined number.
- the accuracy of extracting partial areas can be increased by using feature points with a higher number of matches.
- The feature amount storage unit 81 may treat a set of feature points selected from a connected region, that is, a region obtained by connecting mutually adjacent pixels among pixels determined to belong to the same color, as a feature point arrangement, and store the registered image feature quantity calculated based on that feature point arrangement.
- The corresponding region determination unit 83 may then select other connection target nodes within a predetermined distance of a connection target node (for example, by connecting the feature points with edges) and extract the partial region specified from the connected region containing the selected connection target nodes.
- the accuracy of extracting the partial regions can be further increased.
- When it is determined that a search image feature quantity and a registered image feature quantity match, the corresponding feature point detection unit 82 may use the feature points of the matched feature quantities to generate a feature point matching history, which is a combination of feature points in the search image and feature points in the registered image.
- The corresponding region determination unit 83 may then calculate, using the feature point matching history, a parameter (for example, a homography matrix) for geometrically transforming the search image into the registered image, and extract, as the partial region, the region obtained by projecting the region of the search image onto the registered image based on the calculated parameter.
- an output region having a size corresponding to the search image can be stably obtained even when there are many omissions in the feature point match determination.
- The corresponding region determination unit 83 may also select, from among the parameters calculated using combinations of feature points included in the feature point matching history, the parameter with the higher probability (for example, evaluation value) of transforming the search image into the registered image, and extract as the partial region the region obtained by projecting the region of the search image onto the registered image based on the selected parameter.
- The image partial region extraction apparatus may further include a feature point extraction unit (for example, the search image feature point generation unit Q201) that extracts feature points from the search image, a feature point arrangement generation unit (for example, the search image feature point arrangement generation unit Q202) that generates one or more feature point arrangements based on the extracted feature points, and a search image feature amount generation unit that generates search image feature quantities based on the feature point arrangements, and the corresponding feature point detection unit 82 may detect the corresponding feature points by comparing the search image feature quantities with the registered image feature quantities.
- (Supplementary note 1) An image partial region extraction apparatus that extracts a partial region corresponding to a search image from a registered image, comprising: a feature amount storage unit that stores, for each feature point of interest of the registered image, a registered image feature quantity that is invariant to geometric transformation and is calculated based on a feature point arrangement, which is a set of feature points existing in the vicinity of the feature point of interest and ordered according to a predetermined rule; a corresponding feature point detection unit that detects corresponding feature points, which are feature points included in the feature point arrangement in the registered image corresponding to feature points included in the feature point arrangement in the search image, by comparing the degree of coincidence between the registered image feature quantity and the search image feature quantity, which is a feature quantity invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image; and a corresponding region determination unit that extracts a partial region in the registered image specified by the corresponding feature points having a higher degree of matching with the feature points in the search image.
- (Supplementary note 2) The image partial region extraction apparatus according to supplementary note 1, wherein, when it is determined that the search image feature quantity matches the registered image feature quantity, the corresponding feature point detection unit calculates, for each corresponding feature point, the number of times it matches a feature point in the feature point arrangement from which the search image feature quantity was calculated, and the corresponding region determination unit extracts a partial region in the registered image specified by connection target nodes, which are corresponding feature points whose number of matches is equal to or greater than a predetermined number.
- (Supplementary note 3) The image partial region extraction apparatus according to supplementary note 2, wherein the feature amount storage unit treats, as the feature point arrangement, a set of feature points selected from a connected region obtained by connecting mutually adjacent pixels among pixels determined to belong to the same color, and stores the registered image feature quantity calculated based on that feature point arrangement, and the corresponding region determination unit selects other connection target nodes within a predetermined distance from a connection target node and extracts a partial region specified from the connected region containing the selected connection target nodes.
- (Supplementary note 4) The image partial region extraction apparatus according to any one of supplementary notes 1 to 3, wherein, when it is determined that the search image feature quantity matches the registered image feature quantity, the corresponding feature point detection unit generates, using the feature points of the matched feature quantities, a feature point matching history that is a combination of feature points in the search image and feature points in the registered image, and the corresponding region determination unit calculates, using the feature point matching history, a parameter for geometrically transforming the search image into the registered image and extracts, as a partial region, the region obtained by projecting the region of the search image onto the registered image based on the calculated parameter.
- (Supplementary note 5) The image partial region extraction apparatus according to supplementary note 4, wherein the corresponding region determination unit selects, from among the parameters calculated using combinations of the feature points included in the feature point matching history, a parameter having a higher probability of transforming the search image into the registered image, and extracts, as a partial region, the region obtained by projecting the region of the search image onto the registered image based on the selected parameter.
- (Supplementary note 6) The image partial region extraction apparatus according to any one of supplementary notes 1 to 5, further comprising a feature point extraction unit that extracts feature points from the search image, a feature point arrangement generation unit that generates one or more feature point arrangements based on the extracted feature points, and a search image feature amount generation unit that generates search image feature quantities based on the feature point arrangements, wherein the corresponding feature point detection unit detects the corresponding feature points by comparing the search image feature quantities with the registered image feature quantities.
- (Supplementary note 7) An image partial region extraction method for extracting a partial region corresponding to a search image from a registered image, comprising: storing, for each feature point of interest of the registered image, a registered image feature quantity that is invariant to geometric transformation and is calculated based on a feature point arrangement, which is a set of feature points existing in the vicinity of the feature point of interest and ordered according to a predetermined rule; detecting corresponding feature points, which are feature points included in the feature point arrangement in the registered image corresponding to feature points included in the feature point arrangement in the search image, by comparing the degree of coincidence between the registered image feature quantity and the search image feature quantity, which is a feature quantity invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image; and extracting a partial region in the registered image specified by the corresponding feature points having a higher degree of matching with the feature points in the search image.
- (Supplementary note 13) An image partial region extraction program for extracting a partial region corresponding to a search image from a registered image, the program causing a computer to execute: a corresponding feature point detection process of detecting corresponding feature points, which are feature points included in the feature point arrangement in the registered image corresponding to feature points included in the feature point arrangement in the search image, by comparing the degree of coincidence between a registered image feature quantity, which is a feature quantity invariant to geometric transformation calculated based on a feature point arrangement that is a set of feature points existing in the vicinity of each feature point of interest of the registered image and ordered according to a predetermined rule, and a search image feature quantity, which is a feature quantity invariant to geometric transformation calculated based on the feature point arrangement for each feature point of interest in the search image; and a corresponding region determination process of extracting a partial region in the registered image specified by the corresponding feature points having a higher degree of matching with the feature points in the search image.
- (Supplementary note 15) The image partial region extraction program according to supplementary note 13, which causes the computer, in the corresponding feature point detection process, to treat as the feature point arrangement a set of feature points selected from a connected region obtained by connecting mutually adjacent pixels among pixels determined to belong to the same color, and, when it is determined that the registered image feature quantity calculated based on that feature point arrangement matches the search image feature quantity, to calculate for each corresponding feature point the number of times it matches a feature point in the feature point arrangement from which the search image feature quantity was calculated; and, in the corresponding region determination process, to select other connection target nodes within a predetermined distance from a connection target node and extract the partial region specified from the connected region containing the selected connection target nodes.
- (Supplementary note 16) The image partial region extraction program according to any one of the preceding supplementary notes, which causes the computer, in the corresponding feature point detection process, when it is determined that the search image feature quantity matches the registered image feature quantity, to generate, using the feature points of the matched feature quantities, a feature point matching history that is a combination of feature points in the search image and feature points in the registered image, and, in the corresponding region determination process, to calculate, using the feature point matching history, a parameter for geometrically transforming the search image into the registered image and extract, as a partial region, the region obtained by projecting the region of the search image onto the registered image based on the calculated parameter.
- (Supplementary note 17) The image partial region extraction program according to supplementary note 16, which causes the computer, in the corresponding region determination process, to select a parameter having a higher probability of transforming the search image into the registered image, and to extract, as a partial region, the region obtained by projecting the region of the search image onto the registered image based on the selected parameter.
- (Supplementary note 18) The image partial region extraction program according to any one of supplementary notes 13 to 17, which further causes the computer to execute a feature point extraction process of extracting feature points from the search image, a feature point arrangement generation process of generating one or more feature point arrangements based on the extracted feature points, and a search image feature amount generation process of generating search image feature quantities based on the feature point arrangements, and which causes the computer, in the corresponding feature point detection process, to detect the corresponding feature points by comparing the search image feature quantities with the registered image feature quantities.
- The present invention can be applied to general information processing apparatuses that perform information processing using, as a key, content photographed by a user from various magazines.
- The information processing apparatus described above can be applied to an apparatus that searches for digital information from real-world objects (paper, boards, and the like) on which text such as newspaper and magazine articles, various advertisements, and explanatory materials is printed.
- the above-described information processing apparatus can be applied to an apparatus that performs recording reservation, video viewing, video content purchase processing, and the like using a television guide magazine.
- The image partial region extraction apparatus described above can also be applied to a device that identifies a mail item from an image of the mail item, a courier parcel, or the like. Specifically, the image partial region extraction apparatus can be used to recognize the address area and sender area written on a postal item or courier parcel.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates in particular to a device in which, for each feature point of interest in a registered image, a feature value storage unit (81) stores a registered image feature value that is invariant to a geometric transformation and is calculated on the basis of the feature point arrangement, that is, the set of feature points in the vicinity of the feature point of interest, ordered according to a prescribed rule. A corresponding feature point detection unit (82) compares the degrees of coincidence between the registered image feature value and search image feature values that are invariant to a geometric transformation and are calculated on the basis of the feature point arrangement for each feature point of interest in a search image, and detects a corresponding feature point, which is a feature point contained in the feature point arrangement of the registered image that corresponds to a feature point contained in the feature point arrangement of the search image. A corresponding region determination unit (83) extracts a sub-region of the registered image specified by the corresponding feature point that has the highest degree of coincidence with the feature point of the search image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014541924A JPWO2014061221A1 (ja) | 2012-10-18 | 2013-10-02 | 画像部分領域抽出装置、画像部分領域抽出方法および画像部分領域抽出用プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-230885 | 2012-10-18 | ||
JP2012230885 | 2012-10-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014061221A1 true WO2014061221A1 (fr) | 2014-04-24 |
Family
ID=50487801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005877 WO2014061221A1 (fr) | 2012-10-18 | 2013-10-02 | Dispositif d'extraction de sous-régions d'images, procédé d'extraction de sous-régions d'images et programme pour l'extraction de sous-régions d'images |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2014061221A1 (fr) |
WO (1) | WO2014061221A1 (fr) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123542A (zh) * | 2014-07-18 | 2014-10-29 | 大连理工大学 | 一种轮毂工件定位的装置及其方法 |
CN105243661A (zh) * | 2015-09-21 | 2016-01-13 | 成都融创智谷科技有限公司 | 一种基于susan算子的角点检测方法 |
US20220012213A1 (en) * | 2016-03-08 | 2022-01-13 | International Business Machines Corporation | Spatial-temporal storage system, method, and recording medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6759175B2 (ja) * | 2017-10-27 | 2020-09-23 | 株式会社東芝 | 情報処理装置および情報処理システム |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000215317A (ja) * | 1998-11-16 | 2000-08-04 | Sony Corp | 画像処理方法及び画像処理装置 |
JP2005100407A (ja) * | 2003-09-24 | 2005-04-14 | Seiko Epson Corp | 複数のソース画像からパノラマ画像を作成するシステム及び方法 |
WO2008066152A1 (fr) * | 2006-11-30 | 2008-06-05 | Nec Corporation | Dispositif, procédé et programme de génération de valeur caractéristique d'image de document |
WO2009060975A1 (fr) * | 2007-11-08 | 2009-05-14 | Nec Corporation | Dispositif de vérification d'agencement de points caractéristiques, dispositif de vérification d'image, procédé pour celui-ci et programme |
WO2009081866A1 (fr) * | 2007-12-26 | 2009-07-02 | Nec Corporation | Dispositif de mise en correspondance de caractéristiques entre motifs, procédé de mise en correspondance de caractéristiques entre motifs utilisé pour celui-ci, et programme s'y rapportant |
WO2010053109A1 (fr) * | 2008-11-10 | 2010-05-14 | 日本電気株式会社 | Dispositif de mise en correspondance d'images, procédé de mise en correspondance d'images et programme de mise en correspondance d'images |
Non-Patent Citations (1)
Title |
---|
TATSUO AKIYAMA ET AL.: "An Identification Method for European Printed Address Images Using Affine Invariants", IEICE TECHNICAL REPORT, vol. 107, no. 491, 14 February 2008 (2008-02-14), pages 7 - 12 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123542A (zh) * | 2014-07-18 | 2014-10-29 | 大连理工大学 | 一种轮毂工件定位的装置及其方法 |
CN104123542B (zh) * | 2014-07-18 | 2017-06-27 | 大连理工大学 | 一种轮毂工件定位的装置及其方法 |
CN105243661A (zh) * | 2015-09-21 | 2016-01-13 | 成都融创智谷科技有限公司 | 一种基于susan算子的角点检测方法 |
US20220012213A1 (en) * | 2016-03-08 | 2022-01-13 | International Business Machines Corporation | Spatial-temporal storage system, method, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
JPWO2014061221A1 (ja) | 2016-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11861888B2 (en) | Logo recognition in images and videos | |
US9076069B2 (en) | Registering metadata apparatus | |
US10650264B2 (en) | Image recognition apparatus, processing method thereof, and program | |
WO2014061222A1 (fr) | Dispositif de traitement d'informations, méthode de traitement d'informations et programme de traitement d'informations | |
JP6278276B2 (ja) | 物体識別装置、物体識別方法、及びプログラム | |
WO2021110174A1 (fr) | Procédé et dispositif de reconnaissance d'image, dispositif électronique et support de stockage | |
WO2007130688A2 (fr) | Dispositif informatique mobile à capacité d'imagerie | |
WO2014061221A1 (fr) | Dispositif d'extraction de sous-régions d'images, procédé d'extraction de sous-régions d'images et programme pour l'extraction de sous-régions d'images | |
JP2013109773A (ja) | 特徴マッチング方法及び商品認識システム | |
US20130322758A1 (en) | Image processing apparatus, image processing method, and program | |
US20080037904A1 (en) | Apparatus, method and program storage medium for image interpretation | |
US20130100296A1 (en) | Media content distribution | |
JP5767887B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP6304815B2 (ja) | 画像処理装置ならびにその画像特徴検出方法、プログラムおよび装置 | |
JP5278093B2 (ja) | 記事関連情報提供方法、装置、プログラム、記録媒体 | |
US11798210B2 (en) | Neural network based detection of image space suitable for overlaying media content | |
Swaminathan et al. | Localization based object recognition for smart home environments | |
Kaur et al. | Image Matching Techniques: A Review | |
JP5967036B2 (ja) | 画像検索システム、情報処理装置及びプログラム | |
US20230144394A1 (en) | Systems and methods for managing digital notes | |
JP2010262578A (ja) | 帳票辞書生成装置、帳票識別装置、帳票辞書生成方法、及びプログラム | |
JP2016224884A (ja) | 情報処理装置、及びプログラム | |
CN115460456A (zh) | 数字内容添加的目标区域提取 | |
CN104866494B (zh) | 一种信息处理方法及电子设备 | |
WO2023220172A1 (fr) | Systèmes et procédés d'ingestion et de traitement de contenu enrichissable |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13847547 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014541924 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13847547 Country of ref document: EP Kind code of ref document: A1 |