WO2013154062A1 - Image recognition system, image recognition method, and program - Google Patents

Image recognition system, image recognition method, and program

Info

Publication number
WO2013154062A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
partial
image identification
information
feature point
Prior art date
Application number
PCT/JP2013/060564
Other languages
French (fr)
Japanese (ja)
Inventor
Masafumi Yano
Yuma Matsuda
Original Assignee
NEC Corporation
Priority date
Filing date
Publication date
Application filed by NEC Corporation
Publication of WO2013154062A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words

Definitions

  • the present invention relates to a system for recognizing or searching for general objects or shapes contained in image information, and more particularly to a method of identifying a target using silhouette information of the object or shape.
  • since the contours of similar objects are known to have similar curvature information, the curvature-based technique above can compute a similarity even between contours whose shapes differ slightly. Furthermore, not only curvature but also various other feature quantities related to the object shape, such as inflection points (the characteristic points of the curvature) and the position coordinates of the contour, can be extracted from the silhouette information of the object shape (Non-Patent Document 1). While silhouette information can thus be used to extract various features of object outlines, identifying objects from silhouettes suffers from the problem that when multiple objects are adjacent and their silhouettes merge, they cannot easily be separated (Non-Patent Document 2).
  • Non-Patent Document 2 points out that when handwritten characters are joined together, it cannot be determined where the joining points are.
  • to deal with merged silhouettes, methods using learned joining patterns have been proposed: for example, after restricting the number of joined characters to two, joining patterns of two characters are learned in advance, the silhouette is cut where a learned pattern appears, and the target is then identified from the divided silhouettes.
  • such learned joining patterns have problems; for example, they cannot cope when the number of joined characters (the number of objects) is unknown.
  • JP 10-055447 A; PCT/JP2011/075599; Japanese Patent Application No. 2011-268907
  • an image identification method for identifying an object or shape is implemented, comprising: a contour extraction step of extracting the contour of an object or shape from target image data to be detected; a feature point detection step of detecting a group of feature points from the extracted contour; a partial contour generation step of using the detected feature point information to divide the contour between feature points into one or more regions and generating the divided regions as partial contours; and a contour matching step of selecting one or more of the generated partial contours and matching them against contour information.
  • an image identification system, an image identification method, and a program that perform general object recognition of objects or shapes contained in silhouette information, based on that silhouette information, can thereby be provided.
  • FIG. 1 is a configuration diagram of an image identification system according to the first embodiment.
  • FIG. 2 is an explanatory diagram illustrating an example of an object shape to be subjected to pattern recognition.
  • FIG. 3 is an explanatory diagram showing an example of contour information in silhouette information.
  • FIG. 4 is an explanatory diagram showing an example of feature points on the contour.
  • FIG. 5 is an explanatory diagram showing an example of the selected partial contour.
  • FIG. 6 is an explanatory diagram illustrating an example of an object shape stored in the storage unit.
  • FIG. 7 is a flowchart showing an example of the operation of the image identification system in the first embodiment.
  • FIG. 8 is a configuration diagram of an image identification system according to the second embodiment.
  • FIG. 9 is an explanatory diagram illustrating an example of a method for reconstructing a contour.
  • FIG. 10 is a flowchart illustrating an example of the operation of the image identification system according to the second embodiment.
  • FIG. 11 is a configuration diagram of an image identification system according to the third embodiment.
  • FIG. 12 is an explanatory diagram illustrating an example of a method for correcting a contour between feature points.
  • FIG. 13 is an explanatory diagram illustrating an example of an object shape to be subjected to pattern recognition.
  • FIG. 14 is an explanatory diagram illustrating an example of the complemented contour.
  • FIG. 15 is a flowchart illustrating an example of the operation of the image identification system according to the third embodiment.
  • the processing operation of the system can be described in four steps.
  • in the first step, the whole contour obtainable from the silhouette information of one or more objects is extracted.
  • in the second step, characteristic points on the extracted contour are detected.
  • in the third step, the contour is divided into one or more parts based on the characteristic points extracted in the second step, and partial contours are generated.
  • in the fourth step, an object is identified by matching the generated partial contour information against database information.
  • FIG. 1 is a block diagram illustrating a configuration of a recognition system according to an embodiment.
  • the image identification system includes a control unit 10 that manages the overall information processing, an image information acquisition unit 20 that acquires the image data to be detected, an image information storage unit 30 that stores the acquired image data, a contour information storage unit 40 in which previously extracted object and shape features are stored as a database, and a matching result output unit 50 that outputs the recognition result.
  • the image recognition system includes a contour extraction unit 101, a feature point detection unit 102, a partial contour generation unit 103, and a partial contour matching unit 104 as the image identification unit 100 according to the present invention.
  • the control unit 10 manages the overall operation of information processing related to image identification. In the image information acquisition unit 20, image data specified by the user such as a moving image or a photograph is taken into the system and stored in the image information storage unit 30.
  • the image information specified by the user may be acquired as it is, or may be acquired by performing conversion that facilitates subsequent processing, such as black and white conversion using luminance information or the like.
  • An example of the image information acquired by the image information acquisition unit 20 is illustrated in FIG. It is also possible to automatically collect image information from a moving image or the like at an arbitrary interval or the like.
  • the image information storage unit 30 stores acquired image data, results obtained by collation, intermediate data (closed contour shape, complementary contour, feature point coordinates, etc.) and the like as necessary.
  • the image information storage unit 30 may be implemented as memory, an HDD, or any other storage device. In the contour information storage unit 40, the data used for queries is extracted and stored.
  • the collation result output unit 50 outputs the result obtained by the image identification unit 100. For example, together with the collation result acquired from the image information storage unit 30, the target object or shape of the collation result recorded in the contour information storage unit 40 is output to a monitor or the like. Note that any information may be output as the output form.
  • the contour extraction unit 101 extracts, as contour information (silhouette information), all or only part of the contours obtainable from the image data to be detected, which was acquired by the image information acquisition unit 20 and recorded in the image information storage unit 30.
  • the contour extraction unit 101 extracts from the target image data, for example, points where hue, saturation, or brightness changes rapidly, using a Laplacian-of-Gaussian filter or the like.
  • the point of sudden change here may be determined by whether differential values such as hue, saturation and brightness exceed a predetermined threshold.
  • the method of extracting the contour information is not limited to the exemplified method.
  • the contour of the extracted object may be expressed as a collection of contour points, for example, and each point may be represented by (x, y) or the like using a Cartesian coordinate system.
  • an example of the contour information extracted by the contour extraction unit 101 is shown in FIG. 3.
  • for silhouette information such as that of FIG. 2, the outer contour shown in FIG. 3 is extracted.
  • a plurality of contours may be obtained from the target image data; the processing described below is simply performed on each acquired contour separately. In the following description it is assumed that only one contour has been acquired.
  • the feature point detection unit 102 detects characteristic points (a feature point group) on the contour to be processed. As a feature point, for example, a point at which the value of the curvature k(t) defined by formula (1) becomes zero, that is, an inflection point, is used.
  • the curvature k(t) is defined in terms of the contour coordinate t, taken so as to travel once around the contour from an arbitrary starting point, and the first and second derivatives with respect to t of x and y when the contour is expressed in an orthogonal (x, y) coordinate system (see formula (1)).
  • the result of detecting the feature points defined in this way on the contour of FIG. 3 is indicated by circles in FIG. 4. The method of detecting feature points is not limited to the above; for example, a point at which the curvature k(t) changes abruptly may be defined as a "connection point" and adopted as a feature point.
  • an abrupt change here is judged, for example, by whether the differential value of the curvature exceeds a predetermined threshold.
  • the definition of a connection point is not limited to the above; any definition may be used as long as it captures a characteristic point at which objects connect.
  • the partial contour generation unit 103 divides the contour into one or a plurality of regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour. For the partial contour, any means using the feature point information may be used, but when two or more feature points are detected, it is desirable to generate a region having the feature points as both ends as the partial contour.
  • the partial contour matching unit 104 selects one or more of the partial contours generated by the partial contour generation unit 103, and identifies the object by checking the selected partial contours against the query data recorded in the contour information storage unit 40.
  • when selecting partial contours, it is desirable to select contiguous partial contours.
  • many existing methods can be used for the matching process, for example comparison of feature amounts for each partial contour, or matching based on object position using all of the selected partial contours.
  • FIG. 7 is a flowchart showing an example of the operation of the present embodiment.
  • the image information acquisition unit 20 acquires target image data designated by the user and records it in the image information storage unit 30 (S1001).
  • the acquisition of image information is not limited to the one specified by the user, and the system may acquire it automatically or semi-automatically.
  • the contour extraction unit 101 extracts the contour information (silhouette information) of the object or shape from the target image data to be detected recorded in the image information storage unit 30 (S1002).
  • contour information can be extracted, for example, only for data satisfying a criterion the user designates in advance, such as pixels whose luminance is at or above a threshold.
  • the feature point detection unit 102 detects characteristic points on the contour as a feature point group (S1003).
  • the partial contour generation unit 103 divides the contour into one or a plurality of regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour ( S1004).
  • the partial contour matching unit 104 selects one or more contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the corrected complementary contour (S1005),
  • and performs object matching by comparing them with the contour information in the contour information storage unit 40 (S1006).
  • if the desired object is not obtained, partial contour selection (S1005) is performed again and the same processing is repeated (S1007).
  • the control unit 10 may perform processing for changing the above processing and parameters as necessary. If processing is performed under different conditions, the accuracy can be improved. Thereafter, the control unit 10 outputs a collation result from the collation result output unit 50.
  • by operating the image identification system in this way, recognition of a single object contained in a silhouette can be expected to be obtained efficiently from silhouette information in which the number of contained objects is unclear.
  • each unit of the image identification system may be realized by using a combination of hardware and software. In a form in which hardware and software are combined, an image identification program is developed in the RAM, and hardware such as a control unit (CPU) is operated based on the program, thereby realizing each unit as various means. Further, the program may be recorded on a storage medium and distributed.
  • the program recorded on the recording medium is read into a memory via a wired, wireless, or recording medium itself, and operates a control unit or the like.
  • the recording medium include an optical disk, a magnetic disk, a semiconductor memory device, and a hard disk.
  • an information processing apparatus operating as the image identification system can be realized by operating the control unit, on the basis of an image identification program loaded into RAM, as contour extraction means, feature point detection means, feature point pair generation means, complementary contour selection means, and contour matching means.
  • FIG. 8 is a block diagram illustrating a configuration of a recognition system according to an embodiment.
  • the contour information reconstruction unit 111 reconstructs the partial contours generated by the partial contour generation unit 103 by cutting or joining partial contours as necessary. Any reconstruction method may be used, but it is preferable to refer to the contour information of objects already recorded in the contour information storage unit 40.
  • an example of the technique adopted by the contour information reconstruction unit 111 is described below with reference to FIG. 9, using the silhouette information of FIG. 2.
  • for the silhouette information shown in the lower part of (a), the partial contours generated and selected by the feature point detection unit 102, the partial contour generation unit 103, and the partial contour matching unit 104 are shown in (b).
  • among the partial contours of the shape shown in the upper part of (a), which is stored in the contour information storage unit 40, the partial contour corresponding to the lower partial contour of (b) is selected as shown in the upper part of (b).
  • the partial contours of the stored shape that were not selected are then determined as shown in (c).
  • the corresponding partial contours in the lower shape may be composed of the two lower partial contours of (c), taken as the partial contours adjacent to the partial contour selected in (b).
  • this reconstruction can be carried out using geometric information such as the length and curvature of the partial contours shown in (c).
  • (d) shows the result of reconstruction using the length information of the partial contours shown in (c); the contour information is reconstructed by this series of processes.
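A minimal sketch of the length-based reconstruction idea just described, assuming partial contours are (N, 2) point arrays: among the candidate partial contours adjacent to the matched ones, the subset whose joined arc length is closest to that of the stored, unselected partial contour is chosen. The function names, and the use of length alone (the text also mentions curvature), are illustrative assumptions rather than the patent's prescription.

```python
# Length-based reconstruction sketch: pick the candidate partial contours
# whose total arc length best accounts for a stored, unmatched partial
# contour of the reference shape.
import numpy as np
from itertools import combinations

def arc_length(part: np.ndarray) -> float:
    d = np.diff(part.astype(float), axis=0)
    return float(np.hypot(d[:, 0], d[:, 1]).sum())

def reconstruct_by_length(candidates: list, target_length: float,
                          max_join: int = 2) -> list:
    """Return the candidate subset whose total length best matches the target."""
    best, best_err = [], float("inf")
    for n in range(1, max_join + 1):
        for combo in combinations(candidates, n):
            err = abs(sum(arc_length(p) for p in combo) - target_length)
            if err < best_err:
                best, best_err = list(combo), err
    return best
```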
  • [Description of operation] Next, an operation example of the embodiment is described. FIG. 10 is a flowchart showing an example of the operation of this embodiment; description of operations with little relevance to the present invention is omitted.
  • the contour information reconstruction unit 111 cuts or joins the necessary partial contours to the partial contours generated by the partial contour generation unit 103 (S1101).
  • the partial contour matching unit 104 selects one or more contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the corrected complementary contour (S1005). Object matching is then performed by comparing the selected partial contours with the contour information stored in the contour information storage unit 40 (S1006). If the desired result is not obtained from the comparison against the contour information storage unit 40, contour information reconstruction (S1101) and partial contour selection (S1005) are performed again, and the same processing is repeated (S1007).
  • the control unit 10 may change the above processing and parameters as necessary; performing the processing under different conditions can improve accuracy. Thereafter, the control unit 10 outputs the matching result from the matching result output unit 50.
  • the specific configuration of the present invention is not limited to the above-described embodiment, and changes within a range not departing from the gist of the present invention are included in the present invention.
  • FIG. 11 is a block diagram illustrating the configuration of the recognition system according to this embodiment. Description of configurations with little relevance to the present invention is simplified or omitted.
  • the contour complementing unit 121 supplements necessary contour information with respect to the contour extracted by the contour extracting unit 101.
  • any method may be used as the complementing method at this time, but it is desirable to perform complementation using the feature points detected by the feature point detecting unit 102 as described below.
  • a complementing method using the feature points detected by the feature point detection unit 102 will be described.
  • the two points may in principle be arbitrary, but it is desirable to select a suitable pair in view of geometric conditions and the like, for example on the condition that the angle between the tangents at the two points is at or below a threshold, or that the curvature does not become extremely discontinuous after complementation (for example, its sign does not flip discontinuously).
  • any correction method may be adopted here, such as the approximation by a linear curve (straight line) adopted in the first sub-step.
  • a method using the positional relation of the two points and the angle formed by their tangents is described with reference to FIG. 12. First, the angle formed by the tangents at point 10001 and point 10002 is measured as 10003. Next, the length 10004 of the line segment having points 10001 and 10002 as its ends is derived as the distance between the two points.
  • further, assuming that the small region bounded by points 10001 and 10002 is a circular arc, and letting 10005 be the representative point (center) of that arc, the distance 10006 between point 10001 and point 10005 coincides with the radius of the arc (the radius of curvature).
  • under the arc assumption, the values of angles 10003 and 10007 are equal.
  • using the geometric relationship among these variables, the radius of curvature 10006 can be derived from equation (2). This makes it possible to extract a contour such as the one shown by the bold line in FIG. 14 from image information such as that of FIG. 13, and contour information can then be matched by applying the partial contour matching unit 104 to the contour reconstructed in this way.
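The geometry of FIG. 12 pins down the completing arc: with d the segment length 10004 and θ the tangent angle 10003 (equal to the central angle 10007), the chord relation d = 2r sin(θ/2) gives r = d / (2 sin(θ/2)) for the radius 10006. A minimal sketch of arc complementation under that assumption follows; the function name and the sampling density are illustrative, and θ is assumed to lie in (0, π].

```python
# Arc-complementation sketch per FIG. 12 / equation (2): compute the radius
# of curvature from the chord length and tangent angle, then sample points
# on the circular arc joining the two feature points.
import numpy as np

def complete_with_arc(p1, p2, theta: float, samples: int = 20) -> np.ndarray:
    """Return `samples` points on a circular arc from p1 to p2 (theta in rad)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    chord = p2 - p1
    d = np.linalg.norm(chord)               # segment length 10004
    r = d / (2.0 * np.sin(theta / 2.0))     # radius of curvature 10006
    mid = (p1 + p2) / 2.0
    normal = np.array([-chord[1], chord[0]]) / d
    center = mid + normal * r * np.cos(theta / 2.0)   # arc center 10005
    a1 = np.arctan2(p1[1] - center[1], p1[0] - center[0])
    # sweep the central angle theta from p1; pick the sign that ends at p2
    for sign in (1.0, -1.0):
        angles = a1 + sign * np.linspace(0.0, theta, samples)
        arc = center + r * np.stack([np.cos(angles), np.sin(angles)], axis=1)
        if np.linalg.norm(arc[-1] - p2) < 1e-6 * max(d, 1.0):
            return arc
    return arc
```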
  • FIG. 15 is a flowchart showing an example of the operation of the present embodiment. The description of operations that are not related to the present invention will be omitted.
  • the contour complementing unit 121 supplements the contour extracted by the contour extracting unit 101 after S1003 (S1201).
  • the partial contour generation unit 103 divides the contour into one or a plurality of regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour ( S1004).
  • the contour information reconstruction unit 111 reconstructs the partial contour by cutting or combining the necessary partial contours with the partial contour generated by the partial contour generation unit 103 (S1101). .
  • the partial contour matching unit 104 selects one or more contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the corrected complementary contour (S1005). Object matching is then performed by comparing the selected partial contours with the contour information stored in the contour information storage unit 40 (S1006). If the desired object is not obtained from the comparison against the contour information storage unit 40, the steps from S1201 onward are performed again, and the same processing is repeated (S1007). By implementing this embodiment, objects can be matched even when the information contained in the contour information is not sufficient.
  • the processing of the embodiment may be executed by a computer-readable storage medium encoded with a program, software, or an instruction that can be executed by a computer.
  • the storage medium includes not only a portable recording medium such as an optical disk, a floppy (registered trademark) disk, and a hard disk, but also a transmission medium that temporarily records and holds data such as a network.
  • the specific configuration of the present invention is not limited to the above-described embodiment, and changes within a range not departing from the gist of the present invention are included in the present invention.
  • 10 Control unit
  • 20 Image information acquisition unit (image information acquisition means)
  • 30 Image information storage unit (target image, result)
  • 40 Contour information storage unit (query data)
  • 50 Matching result output unit (matching result output means)
  • 100 Image identification unit (image identification means)
  • 101 Contour extraction unit (contour extraction means)
  • 102 Feature point detection unit (feature point detection means)
  • 103 Partial contour generation unit (partial contour generation means)
  • 104 Partial contour matching unit (partial contour matching means)
  • 111 Contour information reconstruction unit (contour information reconstruction means)
  • 121 Contour complementing unit (contour complementing means)
  • 10001: one point of a feature point pair
  • 10002: the other point of the feature point pair
  • 10003: angle formed by the tangents at the feature point pair
  • 10004: length of the line segment between the feature point pair
  • 10005: center point of the arc when an arc is assumed between the feature point pair
  • 10006: line segment having the arc center point and a point of the feature point pair as its ends (the radius)
  • 10007: angle formed by the two points of the feature point pair about the arc center point

Abstract

The present invention performs generic object recognition on the basis of silhouette information containing multiple objects. The invention implements an image recognition method for performing recognition processing on an object or a shape, said method characterized in comprising: a contour extraction step of extracting a contour of an object or a shape on the basis of object image data as an object of detection; a feature point detection step of detecting a feature point group on the basis of the extracted contour; a partial contour generation step of using the detected feature point information to partition the contours between the feature points into one or more regions in order to generate the partitioned regions as partial contours; and a contour matching step of selecting one or more of the generated partial contours and matching same with contour information.

Description

Image identification system, image identification method, and program
 The present invention relates to a system for recognizing or searching for general objects or shapes contained in image information, and more particularly to a method of identifying a target using silhouette information of the object or shape.
In recent years, with the rapid spread of digital imaging equipment such as digital cameras, expectations have grown for general object recognition, which identifies what objects are contained in captured images and videos.
In general object recognition, image data stored unclassified in a database is classified appropriately, and required image data is retrieved. Furthermore, general object recognition can potentially be applied to a variety of uses, such as extracting a desired scene from a moving image or re-editing by cutting out only the desired scene.
Various recognition technologies related to object recognition, such as face recognition and fingerprint recognition, have been developed, but these are in most cases directed at specific applications. A recognition technology specialized for one application does not operate efficiently when applied to another; problems include a reduced recognition rate, misidentification, and an increased amount of information processing. For this reason, techniques that recognize general objects efficiently are awaited.
As a way of recognizing general objects with an information processing apparatus, methods that use the silhouette of the object shape have been proposed and are widely used (Patent Documents 1 and 2). These documents propose recognition from silhouette information using the curvature at each point of the contour.
Curvature is defined as the reciprocal of the radius of the local arc (the radius of curvature), so curvature itself is not invariant to object size. By relativizing it, however, curvature can be extracted in a form that is invariant to the size of the object.
Since the contours of similar objects are known to have similar curvature information, the curvature-based technique above can compute a similarity even between contours whose shapes differ slightly.
Furthermore, not only curvature but also various other feature quantities related to the object shape, such as inflection points (the characteristic points of the curvature) and the position coordinates of the contour, can be extracted from the silhouette information of the object shape (Non-Patent Document 1).
While silhouette information thus allows various features of an object's outline to be extracted, identifying objects from silhouettes suffers from the problem that when multiple objects are adjacent and their silhouettes merge, they cannot easily be separated (Non-Patent Document 2). Non-Patent Document 2 points out, for example, that when handwritten characters are joined together, it cannot be determined where the joining points are.
To deal with this problem, methods that use learned joining patterns when analysing merged silhouettes have been proposed. For example, after restricting the number of joined characters to two, joining patterns of two characters are learned in advance, the silhouette is cut at the points where a learned joining pattern appears, and the target is then identified using the divided silhouettes. However, such learned joining patterns have numerous problems: they cannot cope when the number of joined characters (the number of objects) is unknown, they cannot cope with joining patterns that were not learned in advance, and the joining pattern differs substantially depending on the degree of joining.
Various problems thus remain in methods that use learned joining patterns. For this reason, shapes need to be cut out of silhouette information freely and efficiently, without restricting the conditions used for recognition, such as the number of characters (number of objects), and without learning joining patterns in advance.
As a technique for cutting shapes out of silhouette information without learning joining patterns in advance, a method has been proposed in which characteristic points on the silhouette's contour are taken as cut-out candidate points and the contour information between the candidate points is complemented to cut out the shape (Patent Document 3). With this method, many shapes can be cut out of silhouette information without learning joining patterns in advance. However, when no characteristic point exists on the silhouette's contour, the method cannot cut out the shape.
JP 10-055447 A; PCT/JP2011/075599; Japanese Patent Application No. 2011-268907
When a general object recognition method based on silhouette information is used, silhouette information corresponding to a single object must be extracted from silhouette information in which a plurality of objects are merged. Conventionally, silhouette information for each single object has been extracted from multi-object silhouette information by extracting the feature points at which to cut.
However, when no characteristic point exists between the silhouettes of two different objects, no feature point can be extracted, and cutting out the shape has been difficult with the conventional method.
In other words, no suitable method had previously been devised for extracting a single closed contour from the silhouette information of one or more object shapes when no characteristic point exists between two different silhouettes.
An object of the present invention is to provide an image identification system, method, and program that perform general object recognition of objects or shapes contained in silhouette information, based on that silhouette information.
Another object of the present invention is to provide a processing technique that performs general object recognition from silhouette information and efficiently attempts to extract a single closed contour.
The invention implements an image identification method for identifying an object or shape, comprising: a contour extraction step of extracting the contour of an object or shape from target image data to be detected; a feature point detection step of detecting a group of feature points from the extracted contour; a partial contour generation step of using the detected feature point information to divide the contour between feature points into one or more regions and generating the divided regions as partial contours; and a contour matching step of selecting one or more of the generated partial contours and matching them against contour information.
According to the present invention, an image identification system, an image identification method, and a program that perform general object recognition of an object or shape contained in silhouette information, based on that silhouette information, can be provided.
The present invention can also provide a processing technique that performs general object recognition from silhouette information and efficiently attempts to extract a single closed contour.
FIG. 1 is a configuration diagram of an image identification system according to the first embodiment.
FIG. 2 is an explanatory diagram illustrating an example of an object shape to be subjected to pattern recognition.
FIG. 3 is an explanatory diagram showing an example of contour information in silhouette information.
FIG. 4 is an explanatory diagram showing an example of feature points on the contour.
FIG. 5 is an explanatory diagram showing an example of the selected partial contour.
FIG. 6 is an explanatory diagram illustrating an example of an object shape stored in the storage unit.
FIG. 7 is a flowchart showing an example of the operation of the image identification system in the first embodiment.
FIG. 8 is a configuration diagram of an image identification system according to the second embodiment.
FIG. 9 is an explanatory diagram illustrating an example of a method for reconstructing a contour.
FIG. 10 is a flowchart illustrating an example of the operation of the image identification system according to the second embodiment.
FIG. 11 is a configuration diagram of an image identification system according to the third embodiment.
FIG. 12 is an explanatory diagram illustrating an example of a method for correcting a contour between feature points.
FIG. 13 is an explanatory diagram illustrating an example of an object shape to be subjected to pattern recognition.
FIG. 14 is an explanatory diagram illustrating an example of the complemented contour.
FIG. 15 is a flowchart illustrating an example of the operation of the image identification system according to the third embodiment.
When a general object recognition method based on silhouette information is used, silhouette information corresponding to a single object must be extracted from silhouette information in which a plurality of objects are merged. Conventionally, this has been done by extracting the feature points at which to cut the silhouette.
However, when no characteristic point exists between the silhouettes of two different objects, no feature point can be extracted, and cutting out the shape has been difficult with the conventional method.
In other words, no suitable method had previously been devised for extracting a single closed contour from the silhouette information of one or more object shapes when no characteristic point exists between two different silhouettes.
To address these problems, the inventors propose the following system for general object recognition.
The processing operation of the system can be described in four steps.
In the first step, the whole contour obtainable from the silhouette information of one or more objects is extracted.
In the second step, characteristic points on the extracted contour are detected.
In the third step, the contour is divided into one or more parts based on the characteristic points extracted in the second step, and partial contours are generated.
In the fourth step, an object is identified by matching the generated partial contour information against database information.
An image identification system that performs each of these steps is described below by way of embodiments.
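Before turning to the embodiments, the four steps can be pictured end to end. The following is a minimal sketch that composes the helper functions sketched later in this description (extract_contour_points, inflection_points, split_at_features, match); all of these names are illustrative rather than taken from the patent, and the contour is assumed to be ordered along the outline.

```python
# Four-step pipeline sketch: extract the outline, detect feature points,
# split into partial contours, and match each against stored query data.
import numpy as np

def identify(gray: np.ndarray, stored_parts: list) -> list:
    contour = extract_contour_points(gray)        # step 1: whole outline
    features = inflection_points(contour)         # step 2: feature points
    parts = split_at_features(contour, features)  # step 3: partial contours
    hits = []                                     # step 4: match against the
    for part in parts:                            #         stored query data
        hits.extend(match(part, stored_parts))
    return hits
```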
An embodiment of the present invention will now be described with reference to FIGS. 1 to 15.
[First Embodiment]
Adopting this embodiment provides an information processing system that performs image identification processing capable of recognizing a single object contained in silhouette information that includes a plurality of object shapes.
[Description of configuration]
FIG. 1 is a block diagram illustrating the configuration of the recognition system according to this embodiment. Description of configurations with little relevance to the present invention is simplified or omitted.
The image identification system comprises a control unit 10 that manages the overall information processing, an image information acquisition unit 20 that acquires the image data to be detected, an image information storage unit 30 that stores the acquired image data and related results, a contour information storage unit 40 in which previously extracted object and shape features are stored as a database, and a matching result output unit 50 that outputs the recognition result. As the image identification unit 100 according to the present invention, the system further includes a contour extraction unit 101, a feature point detection unit 102, a partial contour generation unit 103, and a partial contour matching unit 104.
The control unit 10 manages the overall operation of information processing related to image identification.
The image information acquisition unit 20 takes image data specified by the user, such as a moving image or a photograph, into the system and stores it in the image information storage unit 30. The image information may be captured as specified by the user, or after a conversion that eases subsequent processing, such as black-and-white conversion based on luminance information. An example of the image information acquired by the image information acquisition unit 20 is shown in FIG. 2. Image information can also be collected automatically, for example from a moving image at arbitrary intervals.
The image information storage unit 30 stores the acquired image data, the results obtained by matching, and, as necessary, intermediate data (closed contour shapes, complementary contours, feature point coordinates, and so on). It may be implemented as memory, an HDD, or any other storage device.
In the contour information storage unit 40, the data used for queries is extracted and stored. An external database may also serve as the contour information storage unit 40.
The matching result output unit 50 outputs the result obtained by the image identification unit 100; for example, together with the matching result acquired from the image information storage unit 30, the target object or shape of the matching result recorded in the contour information storage unit 40 is output to a monitor or the like. Any form of information may be output.
The contour extraction unit 101 extracts, as contour information (silhouette information), all or only part of the contours obtainable from the image data to be detected, which was acquired by the image information acquisition unit 20 and recorded in the image information storage unit 30. Here, "all" means that every individual contour is a detection target, and "only part" means that only one nearby contour specified by the user, or an arbitrary number of contours, is a detection target.
The contour extraction unit 101 extracts from the target image data, for example, points where hue, saturation, or brightness changes rapidly, using a Laplacian-of-Gaussian filter or the like. Such rapid change may be judged by whether differential values of hue, saturation, brightness, and so on exceed a predetermined threshold. The method of extracting contour information is not limited to the exemplified one.
The contour of the extracted object may be expressed, for example, as a collection of contour points, each point being represented as (x, y) in a Cartesian coordinate system.
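As an illustration of this step, the following is a minimal sketch, not the patent's implementation: a Laplacian-of-Gaussian filter responds where brightness changes rapidly, and pixels whose response magnitude exceeds a threshold are kept as (x, y) contour points. The function name, sigma, and threshold are assumptions; the thresholding is a crude stand-in for a full edge test, and ordering the points along the outline (e.g. by border following) is omitted.

```python
# Contour extraction sketch: threshold the magnitude of a
# Laplacian-of-Gaussian response and return candidate contour points.
import numpy as np
from scipy.ndimage import gaussian_laplace

def extract_contour_points(gray: np.ndarray, sigma: float = 2.0,
                           threshold: float = 0.05) -> np.ndarray:
    """Return an (N, 2) array of (x, y) candidate contour points."""
    response = gaussian_laplace(gray.astype(float), sigma=sigma)
    ys, xs = np.nonzero(np.abs(response) > threshold)
    # tracing/ordering the points along the outline is omitted here
    return np.stack([xs, ys], axis=1)
```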
An example of the contour information extracted by the contour extraction unit 101 is shown in FIG. 3. When there is a single piece of silhouette information such as the number pattern of FIG. 2 (the silhouettes of two joined objects, a "2" and a "3"), the outer contour shown in FIG. 3 is extracted.
A plurality of contours may be obtained from the target image data; in that case the processing described below is simply performed on each acquired contour separately. The following description assumes that only one contour has been acquired.
The feature point detection unit 102 detects characteristic points (a feature point group) on the contour to be processed. As a feature point, for example, a point at which the value of the curvature k(t) defined by equation (1) below becomes zero, that is, an inflection point, is used.
The curvature k(t) is defined in terms of the contour coordinate t, taken so as to travel once around the contour from an arbitrary starting point on it, and the first and second derivatives with respect to t of x and y when the contour is expressed in an orthogonal (x, y) coordinate system:

$$k(t) = \frac{\dot{x}(t)\,\ddot{y}(t) - \dot{y}(t)\,\ddot{x}(t)}{\bigl(\dot{x}(t)^2 + \dot{y}(t)^2\bigr)^{3/2}} \qquad (1)$$

The result of detecting the feature points defined in this way on the contour of FIG. 3 is indicated by circles in FIG. 4. The method of detecting feature points is not limited to the above. For example, a point at which the value of the curvature k(t) changes abruptly may be defined as a "connection point" and adopted as a feature point; an abrupt change can be judged, for example, by whether the differential value of the curvature exceeds a predetermined threshold. The definition of "connection point" is not limited to the above; any definition may be used as long as it captures a characteristic point at which objects connect.
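A sketch of feature point detection along the lines of equation (1), under the assumption that the contour is an ordered, closed sequence of (x, y) samples; the helper name and the strict sign-change test are illustrative.

```python
# Feature point detection sketch: compute the curvature of equation (1)
# with circular central differences on the closed contour, and return the
# indices where the curvature changes sign (inflection points).
import numpy as np

def inflection_points(contour: np.ndarray) -> np.ndarray:
    """contour: (N, 2) array of (x, y) samples ordered around the outline."""
    def cdiff(a):  # central difference on a closed (circular) sequence
        return (np.roll(a, -1) - np.roll(a, 1)) / 2.0
    x, y = contour[:, 0].astype(float), contour[:, 1].astype(float)
    dx, dy = cdiff(x), cdiff(y)
    ddx, ddy = cdiff(dx), cdiff(dy)
    k = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
    # strict sign changes only; exact zeros would need separate handling
    return np.nonzero(np.sign(k) * np.sign(np.roll(k, -1)) < 0)[0]
```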
The partial contour generation unit 103 divides the contour into one or more regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour. Any means using the feature point information may be used, but when two or more feature points are detected, it is desirable to generate regions having feature points at both ends as the partial contours. When the contour extracted by the contour extraction unit 101 is an open contour, a region bounded by a feature point on the contour and an end point of the contour may also be used as a partial contour.
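A sketch of this splitting step: the closed contour is cut at the detected feature point indices so that each partial contour has feature points at both of its ends. The names are illustrative.

```python
# Partial-contour generation sketch: split a closed contour at feature
# point indices; each segment keeps its two bounding feature points.
import numpy as np

def split_at_features(contour: np.ndarray, feature_idx) -> list:
    """Return a list of (M_i, 2) arrays, one per partial contour."""
    idx = sorted(int(i) for i in feature_idx)
    if not idx:
        return [contour]              # no feature points: one whole contour
    n = len(contour)
    parts = []
    for a, b in zip(idx, idx[1:] + [idx[0] + n]):
        rows = [i % n for i in range(a, b + 1)]   # wrap around the outline
        parts.append(contour[rows])
    return parts
```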
The partial contour matching unit 104 selects one or more partial contours from those generated by the partial contour generation unit 103, and identifies the object by checking the selected partial contours against the query data recorded in the contour information storage unit 40. When selecting partial contours, it is desirable to select contiguous ones. Many existing methods can be used for the matching itself, for example comparison of feature amounts for each partial contour, or matching based on object position using all of the selected partial contours. The partial contours selected in the example of FIG. 2 are shown by the bold line in FIG. 5, and an example of the query data recorded in the contour information storage unit 40 is shown in FIG. 6. The same processing is applied to the query data of FIG. 6 to generate and select partial contours, which are then compared with the partial contours selected in FIG. 5. Any matching method may be used as long as contour information is matched using the selected complementary contour and the partial contours obtained by dividing the previously extracted contour.
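One of the matching strategies mentioned above, comparison of feature amounts per partial contour, might look like the following sketch: each partial contour is resampled, reduced to a curvature descriptor relativized by arc length (and hence invariant to object size, as discussed in the background), and compared by Euclidean distance. The descriptor and the threshold are illustrative assumptions, not the patent's prescription.

```python
# Matching sketch: compare size-relativized curvature descriptors of
# partial contours against stored query data.
import numpy as np

def descriptor(part: np.ndarray, samples: int = 32) -> np.ndarray:
    """Resample a partial contour and return relativized curvature values."""
    s = np.linspace(0.0, 1.0, len(part))
    t = np.linspace(0.0, 1.0, samples)
    x = np.interp(t, s, part[:, 0].astype(float))
    y = np.interp(t, s, part[:, 1].astype(float))
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
    length = np.hypot(np.diff(x), np.diff(y)).sum()
    return k * length       # curvature scales as 1/size, length as size

def match(part: np.ndarray, stored_parts: list, threshold: float = 5.0) -> list:
    """Return indices of stored partial contours similar to `part`."""
    d = descriptor(part)
    return [i for i, ref in enumerate(stored_parts)
            if np.linalg.norm(descriptor(ref) - d) < threshold]
```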
[Description of operation]
Next, an operation example of the embodiment will be described. FIG. 7 is a flowchart showing an example of the operation of the present embodiment.
First, the image information acquisition unit 20 acquires target image data designated by the user and records it in the image information storage unit 30 (S1001). The acquisition of image information is not limited to data specified by the user; the system may acquire it automatically or semi-automatically.
Next, the contour extraction unit 101 extracts the contour information (silhouette information) of the object or shape from the target image data recorded in the image information storage unit 30 (S1002). For example, the contour information can be extracted only from pixels that satisfy a criterion designated in advance by the user, such as pixels whose luminance is equal to or higher than a threshold.
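A minimal sketch of such luminance-threshold extraction might look as follows; the use of OpenCV's findContours as the boundary tracer, and all names and defaults, are assumptions of this sketch rather than part of the embodiment.

```python
import cv2
import numpy as np

def extract_contours(gray_image, luminance_threshold=128):
    """One possible realization of S1002: keep the pixels whose luminance
    meets the user-designated criterion (here, >= a threshold) and trace
    the silhouette boundaries; OpenCV is used only as a convenient
    tracer."""
    mask = (gray_image >= luminance_threshold).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # each traced boundary becomes an (N, 2) float array of (x, y) samples
    return [c.reshape(-1, 2).astype(float) for c in contours]
```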
Next, the feature point detection unit 102 detects characteristic points on the contour as a feature point group (S1003).
Next, the partial contour generation unit 103 divides the contour into one or a plurality of regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour (S1004).
Next, the partial contour matching unit 104 selects one or more partial contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the complemented contour (S1005). Then, an object matching process is performed by comparing the selected partial contours with the contour information stored in the contour information storage unit 40 (S1006). When the desired object is not obtained from the comparison with the contour information stored in the contour information storage unit 40, partial contours are selected again (S1005) and the same processing is repeated (S1007).
At this time, the control unit 10 may change the above processing and its parameters as necessary; performing the processing under different conditions can improve accuracy. Thereafter, the control unit 10 outputs the matching result from the matching result output unit 50.
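Tying the sketches above together, the flow from S1002 to S1007 might be driven by a loop of the following shape; candidate_selections and the inquiry database format are hypothetical stand-ins for the selection strategy and the contour information storage unit 40, introduced only for this sketch.

```python
def identify(gray_image, inquiry_db, max_retries=10):
    """Sketch of the flow S1002-S1007, built on the earlier sketches.
    inquiry_db is assumed to be a list of (name, inquiry_parts,
    inquiry_length) tuples standing in for the storage unit 40."""
    for contour in extract_contours(gray_image):               # S1002
        whole_len = polyline_length(contour)
        feats = inflection_points(contour)                     # S1003
        parts = split_into_partial_contours(contour, feats)    # S1004
        for selection in candidate_selections(parts, max_retries):  # S1005
            for name, q_parts, q_len in inquiry_db:            # S1006
                if match_object(selection, q_parts, whole_len, q_len):
                    return name
    return None                                                # S1007 exhausted

def candidate_selections(parts, max_retries):
    """Hypothetical selection strategy: contiguous runs of partial
    contours, shortest first, up to the retry budget."""
    tried = 0
    for size in range(1, len(parts) + 1):
        for i in range(len(parts) - size + 1):
            if tried >= max_retries:
                return
            tried += 1
            yield parts[i:i + size]
```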
By operating the image identification system in this way, recognition of a single object included in a silhouette can be expected to be performed efficiently from silhouette information in which the number of included objects is not known in advance.
Note that each unit of the image identification system may be realized by a combination of hardware and software. In such a combined form, an image identification program is loaded into the RAM, and hardware such as a control unit (CPU) operates based on the program, thereby realizing each unit as the various means. The program may also be recorded on a storage medium and distributed; the program recorded on the recording medium is read into memory via wire, wirelessly, or from the recording medium itself, and operates the control unit and so on. Examples of the recording medium include an optical disk, a magnetic disk, a semiconductor memory device, and a hard disk.
To describe the above-described embodiment in other words, an information processing apparatus operating as the image identification system can be realized by operating its control unit, based on an image identification program loaded into the RAM, as contour extraction means, feature point detection means, partial contour generation means, and partial contour matching means.
As described above, according to the present invention, it is possible to provide an image identification system, method, and program that perform general object recognition of an object or shape included in silhouette information, based on that silhouette information.
Further, according to the present invention, it is possible to provide a processing method that performs general object recognition from silhouette information and efficiently attempts extraction of a single closed contour.
The specific configuration of the present invention is not limited to the above-described embodiment, and changes that do not depart from the gist of the present invention are also included in the present invention.
[Second Embodiment]
A second embodiment for carrying out the present invention will be described in detail with reference to the drawings. The present embodiment assumes a case where the information included in the partial contour information of the first embodiment is not sufficient. For such a case, the second embodiment employs a technique of reconstructing the partial contours with reference to the contour information stored in the contour information storage unit. To this end, the second embodiment differs from the first embodiment in that a contour information reconstruction unit 111 is newly provided; in addition, some elements have functions different from those of the first embodiment.
[Description of configuration]
FIG. 8 is a block diagram illustrating the configuration of the recognition system according to the present embodiment. Descriptions of configurations with little connection to the present invention are simplified or omitted.
The contour information reconstruction unit 111 reconstructs partial contours by cutting or combining, as necessary, the partial contours generated by the partial contour generation unit 103. Any reconstruction method may be used here, but it is preferable to refer to the stored contour information of objects recorded in the contour information storage unit 40. An example of a technique adopted by the contour information reconstruction unit 111 is described below with reference to FIG. 9, using the silhouette information of FIG. 2.
First, for the silhouette information shown in the lower part of FIG. 9(a), the partial contour generated and selected by the feature point detection unit 102, the partial contour generation unit 103, and the partial contour matching unit 104 is shown in the lower part of (b). At the same time, the corresponding partial contour is selected from the partial contours of the shape shown in the upper part of (a), held in the contour information storage unit 40, as shown in the upper part of (b). Next, for the stored shape shown in the upper part of (b), the partial contour that has not been selected is determined as shown in (c). The corresponding partial contour of the lower shape can then be composed of the two lower partial contours adjacent to the partial contour selected in (b). This reconstruction can be performed using geometric information such as the length and curvature of the partial contour shown in (c); (d) shows the result of reconstruction using the length information of the partial contour shown in (c). The contour information is reconstructed by the series of processes described above.
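As a sketch of the length-based reconstruction in (c)-(d), one might join consecutive partial contours of the target until their accumulated length best approximates that of the unselected stored partial contour; the stopping rule below is an assumption of this sketch (polyline_length is defined in an earlier sketch).

```python
import numpy as np

def reconstruct_by_length(target_parts, start_idx, stored_length):
    """Join consecutive partial contours of the target, starting next to
    the already matched one, until the accumulated length best matches
    the unselected stored partial contour."""
    joined, acc = [], 0.0
    best, best_err = None, float("inf")
    for part in target_parts[start_idx:]:
        joined.append(part)
        acc += polyline_length(part)
        err = abs(acc - stored_length)
        if err < best_err:
            best, best_err = list(joined), err
        elif acc > stored_length:
            break                      # overshooting only gets worse
    return np.vstack(best) if best else None
```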
[Description of operation]
Next, an operation example of the embodiment will be described. FIG. 10 is a flowchart showing an example of the operation of the present embodiment. The description of operations that are not related to the present invention will be omitted.
After S1004, the contour information reconstruction unit 111 reconstructs partial contours by cutting or combining, as necessary, the partial contours generated by the partial contour generation unit 103 (S1101).
The partial contour matching unit 104 then selects one or more partial contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the complemented contour (S1005). An object matching process is then performed by comparing the selected partial contours with the contour information stored in the contour information storage unit 40 (S1006). When the desired information is not obtained from the comparison with the contour information storage unit 40, the contour information is reconstructed again (S1101), partial contours are selected (S1005), and the same processing is repeated (S1007).
At this time, the control unit 10 may change the above processing and its parameters as necessary; performing the processing under different conditions can improve accuracy. Thereafter, the control unit 10 outputs the matching result from the matching result output unit 50.
As described above, by implementing this embodiment, an object can be matched even when the information included in the partial contour information is not sufficient.
The specific configuration of the present invention is not limited to the above-described embodiment, and changes that do not depart from the gist of the present invention are also included in the present invention.
[Third Embodiment]
A third embodiment for carrying out the present invention will be described in detail with reference to the drawings. The present embodiment assumes a case where the information included in the contour information of the images in the first and second embodiments is not sufficient. For such a case, the third embodiment employs a technique of complementing the contour with reference to the contour information stored in the contour information storage unit. To this end, the third embodiment differs from the second embodiment in that a contour complementing unit 121 is newly provided; in addition, some elements have functions different from those of the second embodiment.
[Description of configuration]
FIG. 11 is a block diagram illustrating the configuration of the recognition system according to the present embodiment. Descriptions of configurations with little connection to the present invention are simplified or omitted.
The contour complementing unit 121 complements missing contour information for the contour extracted by the contour extraction unit 101. Any complementing method may be used here, but it is desirable to perform the complementation using the feature points detected by the feature point detection unit 102, as described below.
Among the feature points detected by the feature point detection unit 102, the contour is complemented between two points chosen in an appropriate combination. The two points may be any two points, but it is desirable to select an appropriate pair in consideration of geometric conditions. For example, it is desirable to select pairs on the condition that the angle between the tangents at the two points is less than a threshold, or that the curvature does not become extremely discontinuous after complementation (for example, discontinuous through a switch between positive and negative sign). Any complementing method may be adopted here; for example, an approximation by a first-order curve (a straight line) may be used. A method using the angle formed by the tangents will now be described with reference to the figure. First, the angle 10003 formed by the tangents at the points 10001 and 10002 is measured. Next, the length 10004 of the line segment whose two ends are the points 10001 and 10002 is derived as the distance between the two points. Further, assuming that the small region whose two ends are the points 10001 and 10002 is a circular arc with center point 10005, the distance 10006 between the point 10001 and the point 10005 coincides with the radius of that arc (the radius of curvature). Under the same arc assumption, the angles 10003 and 10007 are equal. Using the geometric relationship between these variables, the radius of curvature 10006 can be derived from formula (2).
r = L / (2 sin(θ/2))    ... (2)

where r is the radius of curvature 10006, L is the line segment length 10004, and θ is the central angle 10007 (equal to the tangent angle 10003 under the arc assumption).
Thereby, a contour as shown by the bold line in FIG. 14 can be extracted from the image information shown in the figure. Contour information can then be matched by operating the partial contour matching unit 104 on the contour thus reconstructed.
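A minimal sketch of the arc complement of formula (2) follows, assuming the tangent angle theta is given in radians and choosing, arbitrarily, the arc centre on the left side of the chord; both choices are assumptions of this sketch.

```python
import numpy as np

def complement_with_arc(p1, p2, theta, n_samples=20):
    """Complement the gap between feature points p1 and p2 with a circular
    arc whose central angle equals the tangent angle theta, so that the
    radius of curvature is r = L / (2 sin(theta / 2)) as in formula (2)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    chord = p2 - p1
    L = float(np.hypot(chord[0], chord[1]))   # line segment length (10004)
    r = L / (2.0 * np.sin(theta / 2.0))       # radius of curvature (10006)
    mid = (p1 + p2) / 2.0
    h = np.sqrt(max(r * r - (L / 2.0) ** 2, 0.0))
    normal = np.array([-chord[1], chord[0]]) / L   # unit left normal
    centre = mid + h * normal                 # arc center point (10005)
    a1 = np.arctan2(p1[1] - centre[1], p1[0] - centre[0])
    a2 = np.arctan2(p2[1] - centre[1], p2[0] - centre[0])
    angles = np.linspace(a1, a2, n_samples)   # samples along the arc
    return centre + r * np.column_stack([np.cos(angles), np.sin(angles)])
```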
[Description of operation]
Next, an operation example of the embodiment will be described. FIG. 15 is a flowchart showing an example of the operation of the present embodiment. The description of operations that are not related to the present invention will be omitted.
After S1003, the contour complementing unit 121 complements the contour extracted by the contour extraction unit 101 (S1201).
Next, the partial contour generation unit 103 divides the contour into one or a plurality of regions using the feature point information detected by the feature point detection unit 102, and generates each divided region as a partial contour (S1004).
After S1004, the contour information reconstruction unit 111 reconstructs partial contours by cutting or combining, as necessary, the partial contours generated by the partial contour generation unit 103 (S1101).
The partial contour matching unit 104 then selects one or more partial contours from the generated partial contours, using the contour information (silhouette information) extracted from the target image data and the complemented contour (S1005). An object matching process is then performed by comparing the selected partial contours with the contour information stored in the contour information storage unit 40 (S1006). When the desired object is not obtained from the comparison with the contour information storage unit 40, the steps from S1201 onward are performed again and the same processing is repeated (S1007).
As described above, by implementing this embodiment, an object can be matched even when the information included in the contour information is not sufficient.
In each of the above-described embodiments, the processing of the embodiment may be executed by means of a computer-readable storage medium encoded with a program, software, or instructions executable by a computer. The storage medium includes not only portable recording media such as an optical disk, a floppy (registered trademark) disk, and a hard disk, but also transmission media that temporarily record and hold data, such as a network.
The specific configuration of the present invention is not limited to the above-described embodiment, and changes that do not depart from the gist of the present invention are also included in the present invention.
According to the present invention, general object recognition is made possible by extracting the silhouette of a single object from the silhouette information of one or a plurality of object shapes; the invention is therefore applicable to uses such as image retrieval and image classification.
This application claims the benefit of priority based on Japanese Patent Application No. 2012-90225 filed on April 11, 2012, the disclosure of which is incorporated herein by reference in its entirety.
10 Control unit (control means)
20 Image information acquisition unit (image information acquisition means)
30 Image information storage unit (target image, result)
40 Contour information storage (inquiry data)
50 Verification result output unit (Verification result output means)
100 Image identification unit (image identification means)
101 Contour extraction unit (contour extraction means)
102 feature point detection unit (feature point detection means)
103 Partial contour generation unit (partial contour generation means)
104 Partial contour matching unit (partial contour matching means)
111 Outline information reconstruction unit (contour information reconstruction means)
121 Contour complement part (contour complement means)
10001: One point of a feature point pair
10002: The other point of the feature point pair
10003: Angle formed by the tangents of the feature point pair
10004: Length of the line segment between the feature point pair
10005: Arc center point when the span between the feature point pair is taken as a circular arc
10006: Line segment whose two ends are the arc center point and a point of the feature point pair
10007: Angle formed about the arc center point by the two points of the feature point pair

Claims (19)

1. An image identification method for identifying an object or shape, comprising:
a contour extraction step of extracting a contour of an object or shape from target image data to be detected;
a feature point detection step of detecting a feature point group from the extracted contour;
a partial contour generation step of dividing, using the detected feature point information, the contour between feature points into one or more regions and generating each divided region as a partial contour; and
a contour matching step of selecting one or more of the generated partial contours and performing a contour information matching process.
2. The image identification method according to claim 1, further comprising a contour information reconstruction step of newly reconstructing a partial contour from portions other than the selected partial contours, wherein the contour matching step is performed using the reconstructed partial contour.
3. The image identification method according to claim 1 or 2, further comprising a contour complementing step of complementing the contour between any two of the detected feature points, wherein the partial contour generation step is performed using the complemented contour.
4. The image identification method according to any one of claims 1 to 3, wherein the feature point is a point at which the direction of the tangent to the contour or the curvature changes abruptly, or a point at which it falls within a predetermined range of values.
5. The image identification method according to any one of claims 1 to 4, wherein the partial contour is reconstructed by referring to the contour to be matched and using any or all of the length, direction, and curvature of the contour.
6. The image identification method according to any one of claims 1 to 5, wherein the complementing of the contour is performed only when one or more pieces of information among the positions, tangent directions, and curvatures of the plurality of feature points fall within predetermined ranges of values.
7. An image identification system for identifying an object or shape, comprising:
contour extraction means for extracting a contour of an object or shape from target image data to be detected;
feature point detection means for detecting a feature point group from the extracted contour;
partial contour generation means for dividing, using the detected feature point information, the contour between feature points into one or more regions and generating each divided region as a partial contour; and
contour matching means for selecting one or more of the generated partial contours and performing a contour information matching process.
8. The image identification system according to claim 7, further comprising contour information reconstruction means for newly reconstructing a partial contour from portions other than the selected partial contours, wherein the contour matching means performs matching using the reconstructed partial contour.
9. The image identification system according to claim 7 or 8, further comprising contour complementing means for complementing the contour between any two of the detected feature points, wherein the partial contour generation means operates using the complemented contour.
10. The image identification system according to any one of claims 7 to 9, wherein the feature point is a point at which the direction of the tangent to the contour or the curvature changes abruptly, or a point at which it falls within a predetermined range of values.
11. The image identification system according to any one of claims 7 to 10, wherein the partial contour is reconstructed by referring to the contour to be matched and using any or all of the length, direction, and curvature of the contour.
12. The image identification system according to any one of claims 7 to 11, wherein the complementing of the contour is performed only when one or more pieces of information among the positions, tangent directions, and curvatures of the plurality of feature points fall within predetermined ranges of values.
13. An image identification program causing a computer to execute:
contour extraction processing for extracting a contour of an object or shape from target image data to be detected;
feature point detection processing for detecting a feature point group from the extracted contour;
partial contour generation processing for dividing, using the detected feature point information, the contour between feature points into one or more regions and generating each divided region as a partial contour; and
contour matching processing for selecting one or more of the generated partial contours and performing a contour information matching process.
14. The image identification program according to claim 13, further causing the computer to execute contour information reconstruction processing for newly reconstructing a partial contour from portions other than the selected partial contours, and to execute the contour matching processing using the reconstructed partial contour.
15. The image identification program according to claim 13 or 14, further causing the computer to execute contour complementing processing for complementing the contour between any two of the detected feature points, and to execute the partial contour generation processing using the complemented contour.
16. The image identification program according to any one of claims 13 to 15, wherein the feature point is a point at which the direction of the tangent to the contour or the curvature changes abruptly, or a point at which it falls within a predetermined range of values.
17. The image identification program according to any one of claims 13 to 16, wherein the partial contour is reconstructed by referring to the contour to be matched and using any or all of the length, direction, and curvature of the contour.
18. The image identification program according to any one of claims 13 to 17, wherein the complementing of the contour is performed only when one or more pieces of information among the positions, tangent directions, and curvatures of the plurality of feature points fall within predetermined ranges of values.
19. A computer-readable storage medium on which the image identification program according to any one of claims 13 to 18 is recorded.
PCT/JP2013/060564 2012-04-11 2013-04-01 Image recognition system, image recognition method, and program WO2013154062A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-090225 2012-04-11
JP2012090225 2012-04-11

Publications (1)

Publication Number Publication Date
WO2013154062A1 true WO2013154062A1 (en) 2013-10-17

Family

ID=49327626

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/060564 WO2013154062A1 (en) 2012-04-11 2013-04-01 Image recognition system, image recognition method, and program

Country Status (2)

Country Link
JP (1) JPWO2013154062A1 (en)
WO (1) WO2013154062A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690132A (en) * 2022-10-20 2023-02-03 北京霍里思特科技有限公司 Image processing method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59160274A (en) * 1983-03-02 1984-09-10 Hitachi Ltd Character cutting system
JPS60200376A (en) * 1984-03-26 1985-10-09 Hitachi Ltd Partial pattern matching system
JPH06309498A (en) * 1993-02-25 1994-11-04 Fujitsu Ltd Picture extracting system

Also Published As

Publication number Publication date
JPWO2013154062A1 (en) 2015-12-17

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13775848; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2014510156; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13775848; Country of ref document: EP; Kind code of ref document: A1)