JP2014106713A - Program, method, and information processor - Google Patents


Info

Publication number
JP2014106713A
Authority
JP
Japan
Prior art keywords
closed region
closed
feature amount
point
region
Prior art date
Legal status
Pending
Application number
JP2012258792A
Other languages
Japanese (ja)
Inventor
Yoshihiro Kanamori
由博 金森
Current Assignee
University of Tsukuba NUC
Original Assignee
University of Tsukuba NUC
Priority date
Filing date
Publication date
Application filed by University of Tsukuba NUC filed Critical University of Tsukuba NUC
Priority to JP2012258792A
Publication of JP2014106713A

Landscapes

  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a matching technique capable of suitably and automatically associating closed regions that should be processed collectively across a plurality of images.

SOLUTION: A program executed by an information processor 1 associates closed regions with each other among a plurality of images having closed regions. The program comprises: a feature quantity extraction unit 21 that extracts, as a feature quantity of a closed region, either a quantity obtained by a product-sum operation on the coordinate components of the points in the closed region, or a quantity obtained from the distribution of distances between a reference point and the points in the closed region; and a closed region association unit 23 that associates the closed regions of a first image with those of a second image on the basis of the feature quantities of the closed regions extracted by the feature quantity extraction unit 21.

Description

The present invention relates to a program, an information processing method, and an information processing apparatus for associating closed regions with each other between a plurality of images having closed regions.

Conventionally, in the coloring step of hand-drawn animation, digital painting is performed: line drawings drawn on paper are digitized with a scanner, and each digitized image is colored manually, one by one, on a computer.

Japanese Patent No. 2835752

GARCIA TRIGO, P., JOHAN, H., IMAGIRE, T., AND NISHITA, T. "Interactive region matching for 2D animation coloring based on feature's variation." IEICE Transactions E92-D, 2009, 6, p.1289-1295.
MADEIRA, J. S., STORK, A., AND GROSS, M. H. "An approach to computer-supported cartooning." The Visual Computer 12, 1996, 1, p.1-17.
QIU, J., SEAH, H. S., AND TIAN, F. "Auto coloring with character registration." In Proceedings of the 2006 International Conference on Game Research and Development, CyberGames '06, 2006, p.25-32.
SIVIC, J., AND ZISSERMAN, A. "Video Google: A text retrieval approach to object matching in videos." In Proceedings of the International Conference on Computer Vision, 2003, vol. 2, p.1470-1477.
SYKORA, D., DINGLIANA, J., AND COLLINS, S. "As-rigid-as-possible image registration for hand-drawn cartoon animations." In Proceedings of the 7th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '09, 2009, p.25-33.
SYKORA, D., DINGLIANA, J., AND COLLINS, S. "LazyBrush: Flexible painting tool for hand-drawn cartoons." Computer Graphics Forum 28, 2009, 2, p.599-608.
SYKORA, D., BEN-CHEN, M., CADIK, M., WHITED, B., AND SIMMONS, M. "TexToons: Practical texture mapping for hand-drawn cartoon animations." In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, NPAR '11, 2011, p.75-84.

For example, in the coloring step for the line drawings of hand-drawn animation, the line drawings are generally colored manually, frame by frame. Line drawings of successive frames of hand-drawn animation often contain corresponding closed regions that should be colored with the same color, for example because the same character is drawn in them. Conventionally, however, there has been no method for suitably and automatically deriving the correspondence between such closed regions, so the corresponding regions could not be colored collectively.

In view of the above problems, an object of the present invention is to provide a matching technique capable of suitably and automatically associating closed regions that should be processed collectively across a plurality of images.

To solve the above problems, the present invention employs the following means. That is, the present invention is a program that causes a computer to associate closed regions with each other between a plurality of images having closed regions, the program comprising: a feature quantity extraction step of extracting, as a feature quantity of a closed region, either a quantity obtained by a product-sum operation on the coordinate components of the points in the closed region, or a quantity obtained from the distribution of distances of the points in the closed region from a reference point; and a closed region association step of associating the closed regions of a first image with the closed regions of a second image on the basis of the feature quantities, extracted in the feature quantity extraction step, of the closed regions of the images.

Here, in the present invention, an image may be a black-and-white line drawing, or an image having shading or color. An image may be a raster image or a vector image. When the image is a line drawing, a closed region is, for example, a closed area surrounded by lines.

In the present invention, a feature quantity is not limited to a single scalar value; it may also be a vector value or a set of values. A point "in the closed region" includes points on the boundary line of the closed region as well as points in its interior. The points in the closed region may be a finite number of points or an infinite number of points. In a raster image, the points in a closed region are, for example, a finite set of points corresponding to all the pixels in the region, or a finite set of points spaced at constant intervals in the x-axis and y-axis directions. In a vector image, the points in a closed region are, for example, all points that can be taken within the region (an infinite number of points). Coordinates are not limited to an orthogonal coordinate system; coordinates in a polar coordinate system are also included. A feature quantity obtained by a product-sum operation on the coordinate components of the points in the closed region includes a feature quantity obtained by additionally performing operations other than the product-sum operation. The distribution of the distances of the points includes both a distribution of discrete values and a continuous distribution of infinitely many values.

In the present invention, the association performed in the closed region association step includes associations in which some of the closed regions of the first image are not associated with any closed region of the second image. For example, when the number of closed regions of the first image differs from that of the second image, some of the closed regions of the image having more closed regions may be left unassociated. The association in the closed region association step also includes associating a certain closed region of the first image with a plurality of closed regions of the second image.

According to the present invention, closed regions are associated between images on the basis of feature quantities extracted from the points in each closed region, so that closed regions that should be processed collectively can be suitably and automatically associated across a plurality of images.

In the feature quantity extraction step of the program according to the present invention, the angle of the inertial principal axis of the closed region may be extracted as a feature quantity obtained by a product-sum operation on the coordinates of the points in the closed region.

Here, in the present invention, the angle of the inertial principal axis includes an angle represented by a vector.

In the feature quantity extraction step of the program according to the present invention, a feature quantity obtained by a product-sum operation on the coordinates of the points in the closed region may be extracted on the basis of the eigenvalues and eigenvectors of a covariance matrix computed from the coordinate components of the points in the closed region.

In the feature quantity extraction step of the program according to the present invention, the centroid of the closed region and the covariance matrix of the coordinates of the points in the closed region may be obtained; an ellipse passing through the coordinates of a predetermined probability density in the two-dimensional normal distribution having the obtained centroid and covariance matrix as its expected value and covariance matrix may then be determined; and feature quantities of the closed region may be extracted from the components constituting the ellipse.
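As a sketch of this step under the standard Gaussian level-set identity (an assumption; the patent does not state which probability density is "predetermined"), the ellipse for a two-dimensional normal distribution follows from the eigen-decomposition of the covariance matrix. The function name and the mass-based parameterisation are illustrative:

```python
import numpy as np

def density_ellipse(mean, cov, prob_mass=0.9):
    """Ellipse of constant density of the 2-D normal N(mean, cov), chosen
    so that it encloses `prob_mass` of the distribution. For a 2-D
    Gaussian, the squared Mahalanobis radius enclosing mass q is
    -2 ln(1 - q); the semi-axes are that radius times sqrt(eigenvalue),
    oriented along the covariance eigenvectors."""
    r2 = -2.0 * np.log(1.0 - prob_mass)
    lam, vec = np.linalg.eigh(cov)          # eigenvalues in ascending order
    semi_minor = np.sqrt(r2 * lam[0]) * vec[:, 0]
    semi_major = np.sqrt(r2 * lam[1]) * vec[:, 1]
    return np.asarray(mean), semi_major, semi_minor

# prob_mass chosen so the Mahalanobis radius is exactly 1
c, a, b = density_ellipse([0.0, 0.0],
                          np.array([[4.0, 0.0], [0.0, 1.0]]),
                          prob_mass=1 - np.exp(-0.5))
```

The centre, major axis, and minor axis of this ellipse then serve as the closed region's feature quantities.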

In the feature quantity extraction step of the program according to the present invention, the following may be extracted as a feature quantity obtained from the distribution of distances of the points in the closed region from a reference point: a function giving the distance from the reference point of a moving point that travels along the boundary line of the closed region, with the path length from the starting point as its variable, is obtained; the obtained function is Fourier-transformed; and the components of frequencies lower than a predetermined frequency are extracted from the result.
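A minimal sketch of this descriptor, assuming the boundary is available as an ordered array of points and taking the centroid as the reference point (function names and the coefficient count are illustrative):

```python
import numpy as np

def low_frequency_descriptor(boundary_pts, reference, n_coeffs=8):
    """Distance from the reference point at each boundary point, taken in
    order along the contour, forms a periodic signal; the magnitudes of
    its low-frequency DFT components give a descriptor that is insensitive
    to the choice of starting point."""
    d = np.linalg.norm(boundary_pts - reference, axis=1)  # distance signal
    spectrum = np.fft.rfft(d)
    # Keep only components below the cutoff (the "low-frequency" part);
    # taking magnitudes discards the start-point-dependent phase.
    return np.abs(spectrum[:n_coeffs])

# toy boundary: 64 points on a circle of radius 5 -> constant distance signal
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)
desc = low_frequency_descriptor(circle, np.array([0.0, 0.0]))
```

For the circular toy boundary only the DC component is significant, as expected for a constant distance function.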

Here, in the present invention, the Fourier transform includes the discrete Fourier transform.

In the feature quantity extraction step of the program according to the present invention, as a feature quantity obtained from the distribution of distances of the points in the closed region from a reference point, a plurality of points on the boundary line of the closed region may be selected according to a predetermined criterion, and a vector whose components are the distances between each selected point and the reference point may be extracted.
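This variant can be sketched as follows, taking equal index spacing along the boundary as one possible "predetermined criterion" (an illustrative choice, not fixed by the patent):

```python
import numpy as np

def sampled_distance_vector(boundary_pts, reference, n_samples=16):
    """Select n_samples boundary points at equal index spacing and return
    their distances to the reference point as a fixed-length vector."""
    idx = np.linspace(0, len(boundary_pts) - 1, n_samples).astype(int)
    return np.linalg.norm(boundary_pts[idx] - reference, axis=1)

# toy boundary: four points at distance 3 from the origin
square = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, 0.0], [0.0, -3.0]])
vec = sampled_distance_vector(square, np.array([0.0, 0.0]), n_samples=4)
```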

In the feature quantity extraction step of the program according to the present invention, as a feature quantity obtained from the distribution of distances of the points in the closed region from a reference point, a bivariate histogram may be extracted whose variates are the radius and the angle of each point in the closed region in polar coordinates with the reference point as the origin.
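A possible sketch, assuming the region is given as an array of point coordinates; the bin counts and the size normalisation are illustrative choices:

```python
import numpy as np

def polar_histogram(points, origin, r_bins=4, theta_bins=8, r_max=None):
    """Bivariate histogram over radius and angle of each region point,
    measured in polar coordinates about the reference point."""
    rel = points - origin
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])  # angles in [-pi, pi]
    if r_max is None:
        r_max = r.max() if r.size else 1.0
    hist, _, _ = np.histogram2d(
        r, theta,
        bins=[r_bins, theta_bins],
        range=[[0.0, r_max], [-np.pi, np.pi]],
    )
    # normalise so regions of different sizes are comparable
    return hist / max(len(points), 1)

pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
hist = polar_histogram(pts, np.array([0.0, 0.0]))
```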

In the feature quantity extraction step of the program according to the present invention, an adjacent-region centroid position, that is, a coordinate calculated from the centroids of the closed regions adjacent to the closed region whose feature quantities are being extracted, may further be extracted as a feature quantity of the closed region.

Here, in the present invention, "adjacent" means that closed regions share a part of their boundary lines.

The program according to the present invention may further comprise a cost calculation step of calculating, on the basis of the feature quantities extracted in the feature quantity extraction step, a cost indicating the degree of difference between two closed regions; and the closed region association step may associate each closed region of the first image with at most one closed region of the second image on the basis of the sum of the costs between the associated closed regions.
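Although the patent leaves the optimiser open, the cost-minimising association can be illustrated with a small brute-force assignment over a hypothetical cost matrix (the one-dimensional "features" and all numbers here are made up; the Hungarian algorithm would scale better for real inputs):

```python
import numpy as np
from itertools import permutations

# Toy cost matrix: cost[i][j] = dissimilarity between region i of the
# first image and region j of the second image.
feat1 = np.array([0.0, 5.0, 9.0])
feat2 = np.array([5.1, 8.8, 0.2])
cost = np.abs(feat1[:, None] - feat2[None, :])

# Exhaustively try every one-to-one assignment and keep the one with the
# smallest total cost, so each region of the first image is associated
# with at most one region of the second image.
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i, p[i]] for i in range(3)))
```

Here `best[i]` is the region of the second image matched to region `i` of the first image.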

Here, in the present invention, the association performed by the closed region association means includes associations in which a certain closed region of the first image is not associated with any closed region of the second image.

The present invention may also be embodied as such a program recorded on a recording medium readable by a computer or another device or machine. Here, a computer-readable recording medium is a recording medium that stores information such as data and programs by an electrical, magnetic, optical, mechanical, or chemical action, and from which the information can be read by a computer or the like.

The present invention may also be a method for associating closed regions with each other between a plurality of images having closed regions, in which a computer executes: a feature quantity extraction step of extracting, as a feature quantity of a closed region, either a quantity obtained by a product-sum operation on the coordinate components of the points in the closed region, or a quantity obtained from the distribution of distances of the points in the closed region from a reference point; and a closed region association step of associating the closed regions of a first image with the closed regions of a second image on the basis of the feature quantities, extracted in the feature quantity extraction step, of the closed regions of the images.

The present invention may also be an information processing apparatus that associates closed regions with each other between a plurality of images having closed regions, the apparatus comprising: extraction means for extracting, as a feature quantity of a closed region, either a quantity obtained by a product-sum operation on the coordinate components of the points in the closed region, or a quantity obtained from the distribution of distances of the points in the closed region from a reference point; and closed region association means for associating the closed regions of a first image with the closed regions of a second image on the basis of the feature quantities, extracted by the extraction means, of the closed regions of the images.

According to one aspect of the present invention, it is possible to provide a matching technique that can suitably and automatically associate closed regions that should be processed collectively across a plurality of images.

FIG. 1 is a schematic diagram showing the configuration of an information processing apparatus on which the program according to Embodiment 1 is executed.
FIG. 2 is an example of a plurality of images associated by the program according to Embodiment 1.
FIG. 3 is a diagram showing examples of closed regions handled by the program according to Embodiment 1.
FIG. 4 is a diagram showing an outline of the functional configuration of the information processing apparatus on which the program according to Embodiment 1 is executed.
FIG. 5 is a diagram showing an example of the inertial principal axis of a closed region.
FIG. 6 is a diagram showing examples of representative ellipses of closed regions.
FIG. 7 is a conceptual diagram showing an example of the association of closed regions between two images.
FIG. 8 is a conceptual diagram showing an example of the association of closed regions among a plurality of images.
FIG. 9 is a flowchart showing the flow of the process of associating closed regions among a plurality of images.
FIG. 10 is a flowchart showing the flow of the process of associating closed regions between two images.
FIG. 11 is a diagram showing an example of a moving point on a closed region.
FIG. 12 is a graph showing an example of the function of the distance between the moving point and the centroid.
FIG. 13 is a conceptual diagram showing the distances between each selected point and the centroid.

Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the present embodiment, the plurality of images are the line drawings of the frames constituting an animation. Note that the embodiments described below are examples of carrying out the present invention and do not limit the present invention to the specific configurations described; in practicing the present invention, a specific configuration suited to the embodiment is preferably adopted as appropriate.

Embodiment 1
<Configuration>
FIG. 1 is a schematic diagram showing the configuration of an information processing apparatus on which the program according to Embodiment 1 is executed.

The information processing apparatus 1 is a computer comprising a CPU (Central Processing Unit) 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, an auxiliary storage device 14 such as an HDD (Hard Disk Drive), and an external interface 15 for connecting peripheral devices.

The CPU 11 is a central processing unit, and controls the RAM 12, the auxiliary storage device 14, and so on by processing the instructions and data of the various programs loaded into the RAM 12 and elsewhere. The RAM 12 is the main storage device; under the control of the CPU 11, various instructions and data are written to and read from it. The auxiliary storage device 14 is a non-volatile auxiliary storage device to and from which information that should be retained even when the computer is powered off is written and read, such as the OS (Operating System) loaded into the RAM 12, various programs including the program according to the present embodiment, and the line drawings of the animation frames. The scanner 2 is connected to the external interface 15. The scanner 2 reads a plurality of line drawings 3 drawn on a paper medium. The read line drawings 3 are taken into the information processing apparatus 1 as digital images and constitute the frames of the animation.

FIG. 2 is an example of a plurality of images associated by the program according to Embodiment 1. FIG. 2 shows, as the plurality of images, frames F1, F2, F3, and F4 that are displayed in succession in the animation. In the present embodiment, the line drawing of each frame is given as a raster image of a predetermined image size. Each frame has a coordinate system whose origin is at the upper-left corner, with the x-axis pointing horizontally to the right and the y-axis pointing vertically downward. Note that the line drawing of each frame may instead be given as a vector image; in that case, the program may convert the given image into a raster image and handle it as such, or may handle it as a vector image as it is.

FIG. 3 is a diagram showing examples of closed regions handled by the program according to Embodiment 1. FIG. 3 shows closed regions R101 (hatched), R102 (shaded), and R103 (checked) as examples of the closed regions in the line drawing of frame F1 of FIG. 2. The closed regions of the line drawing of frame F1 are not limited to these; every portion (white pixels) surrounded by lines (black pixels) qualifies. For an image with shading, a portion surrounded by pixels or lines darker than a predetermined level may be treated as a closed region. For a colored image, a portion surrounded by black lines may be treated as a closed region while colors other than black are ignored, or a portion surrounded by pixels of a predetermined color may be treated as a closed region.
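As an illustrative sketch only (the patent does not fix an extraction algorithm), closed regions of a binary raster image can be collected with a 4-connected flood fill over the white pixels:

```python
from collections import deque

def label_closed_regions(grid):
    """4-connected flood fill: give each maximal run of white pixels (0)
    bounded by black line pixels (1) its own integer label. Returns the
    label grid and the number of regions; regions touching the border are
    still labelled here, though an implementation might discard them as
    not closed."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 0 and labels[sy][sx] == 0:
                current += 1
                labels[sy][sx] = current
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and grid[ny][nx] == 0 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# 5x5 toy "line drawing": a black box with one white interior pixel
img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
labels, n = label_closed_regions(img)
```

In this toy image the fill finds two white regions: the outer background and the single enclosed interior pixel.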

FIG. 4 is a diagram showing an outline of the functional configuration of the information processing apparatus 1 on which the program according to Embodiment 1 is executed. The program according to the present embodiment is read into the RAM 12 and executed by the CPU 11, whereby it functions as a feature quantity extraction unit 21, a cost calculation unit 22, and a closed region association unit 23. In the present embodiment, the functions of the information processing apparatus 1 are executed by the CPU 11, a general-purpose processor, but part or all of these functions may be executed by one or more dedicated processors.

In the present embodiment, the feature quantity extraction unit 21 extracts the angle of the inertial principal axis of a closed region as a feature quantity obtained by a product-sum operation on the coordinate components of the points in the closed region. Specifically, the feature quantity extraction unit 21 computes the eigenvalues and eigenvectors of the covariance matrix calculated from the coordinate components of all the white pixels (corresponding to the points) in the closed region, and obtains the angle of the inertial principal axis from the computed eigenvectors.

In the present embodiment, the feature quantity extraction unit 21 further extracts, as feature quantities of the closed region, the eigenvalues of the covariance matrix and the average coordinate of the white pixels in the closed region (corresponding to the position of the centroid of the closed region). The feature quantity extraction unit 21 also extracts the adjacent-region centroid position, a coordinate calculated from the centroids of the closed regions adjacent to the closed region whose feature quantities are being extracted. That is, the feature quantity extraction unit 21 of the present embodiment extracts, as the feature quantities of a closed region, the angle of the inertial principal axis of the closed region, the eigenvalues of the covariance matrix calculated from the coordinate components of the white pixels in the closed region, the average coordinate of the white pixels in the closed region, and the adjacent-region centroid position. Details are described below.

まず、特徴量抽出部21は、閉領域i内の全画素の平均座標tiを算出する。この平均
座標は閉領域の重心にあたる。次に、特徴量抽出部21は、閉領域i内の全画素の座標の共分散行列Ciを求める。ここで、共分散行列Ciを求めるにあたり、座標のx成分及びy成分を確率変数として扱う。共分散行列Ciは、閉領域i内の全画素数をN、各画素の座標を示す縦ベクトルをpm(m=1,2,…,N)、平均座標tiを縦ベクトルとして、数式1で表される。なお、記号Tは、ベクトルの転置を意味する。

Figure 2014106713
First, the feature quantity extraction unit 21 calculates an average coordinate t i of all pixels in the closed region i. This average coordinate corresponds to the center of gravity of the closed region. Next, the feature quantity extraction unit 21 obtains a covariance matrix C i of the coordinates of all the pixels in the closed region i. Here, in obtaining the covariance matrix C i , the x component and the y component of the coordinates are treated as random variables. The covariance matrix C i uses N as the total number of pixels in the closed region i, p m (m = 1, 2,..., N) as the vertical vector indicating the coordinates of each pixel, and the average coordinate t i as the vertical vector. It is expressed by Formula 1. The symbol T means vector transposition.
Figure 2014106713

本実施形態において、共分散行列Ciを求めることは、閉領域内の各点の座標の成分を
積和演算することに相当する。共分散行列Ciは、例えば、tiのx成分、y成分をtix、tiyとし、pmのx成分、y成分をpmx、pmyとして、数式2となる。

Figure 2014106713
数式2より、x座標成分の2乗の和(Σ)、y座標成分の2乗の和、x座標成分とy座標成分との積の和が演算されることがわかる。 In the present embodiment, obtaining the covariance matrix C i corresponds to performing a product-sum operation on the coordinate components of each point in the closed region. Covariance matrix C i, for example, the x component of t i, a y-component t ix, and t iy, x component of p m, the y component p mx, as p my, the equation 2.
Figure 2014106713
From Equation 2, it can be seen that the sum of the squares of the x coordinate components (Σ), the sum of the squares of the y coordinate components, and the sum of the products of the x coordinate components and the y coordinate components are calculated.

Next, the feature amount extraction unit 21 obtains the two eigenvalues of the covariance matrix C_i, denoting the larger one by λ_i^max and the smaller one by λ_i^min. It then obtains the eigenvector e_i^max corresponding to λ_i^max and the eigenvector e_i^min corresponding to λ_i^min. The feature amount extraction unit 21 extracts the direction of the eigenvector e_i^max as the angle of the principal axis of inertia, which is a feature amount of the closed region. It further extracts the eigenvalues λ_i^max and λ_i^min and the average coordinates t_i as feature amounts.
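The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function name and the pixel-list input format are assumptions, and the 2 × 2 eigenproblem is solved in closed form rather than with a library routine.

```python
import math

def closed_region_features(pixels):
    """Shape features of one closed region: centroid, the two covariance
    eigenvalues, and the angle of the principal axis of inertia.

    pixels: list of (x, y) coordinates of the white pixels in the region.
    """
    n = len(pixels)
    tx = sum(x for x, _ in pixels) / n          # average coordinates t_i
    ty = sum(y for _, y in pixels) / n          # (centroid of the region)

    # Covariance entries: product-sum operations on coordinate components.
    cxx = sum((x - tx) ** 2 for x, _ in pixels) / n
    cyy = sum((y - ty) ** 2 for _, y in pixels) / n
    cxy = sum((x - tx) * (y - ty) for x, y in pixels) / n

    # Eigenvalues of the symmetric 2x2 matrix [[cxx, cxy], [cxy, cyy]].
    mean = (cxx + cyy) / 2.0
    d = math.sqrt(((cxx - cyy) / 2.0) ** 2 + cxy ** 2)
    lam_max, lam_min = mean + d, mean - d

    # Direction of the eigenvector for lam_max (principal axis of inertia).
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    return (tx, ty), lam_max, lam_min, theta
```

For a horizontal strip of pixels the eigenvalue along x dominates and the angle is 0; for pixels along the diagonal it is π/4, matching the geometric intuition of the principal axis.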

The eigenvalues λ_i^max and λ_i^min, the eigenvectors e_i^max and e_i^min, and the average coordinates t_i define an ellipse that represents the distribution of the coordinates of the white pixels in the closed region. The major axis L_i and the minor axis S_i of the ellipse representing the coordinate distribution of the closed region i can be defined by Equation 3.

[Equation 3: image in original]

The center of the ellipse is defined by the centroid t_i. The angle of the principal axis of inertia, the eigenvalues, and the average coordinates extracted by the feature amount extraction unit 21 correspond respectively to the inclination, size, and center of this ellipse. This ellipse can also be regarded as the ellipse passing through the coordinates of a given probability density in the two-dimensional normal distribution whose expected value is the centroid t_i and whose covariance matrix is C_i.

FIG. 5 is a diagram illustrating an example of the principal axis of inertia of a closed region. In FIG. 5, the principal axis of inertia I101 indicates the angle (direction) of the principal axis of inertia for the closed region R101. The ellipse E101 is the ellipse representing the distribution of the coordinates of the pixels in the closed region R101. The centroid C101 is the centroid of the closed region R101 and also the centroid (center) of the ellipse E101. The angle of the principal axis of inertia I101 coincides with the direction of the major axis of the ellipse E101.

FIG. 6 is a diagram illustrating examples of the representative ellipses of the closed regions. FIG. 6 shows the ellipses representing the coordinate distributions within the closed regions of the frames F1 and F2. In FIG. 6, the feature amounts extracted by the feature amount extraction unit 21 (the angle of the principal axis of inertia, the eigenvalues, and the average coordinates) are indicated visually by the ellipses.

According to the present embodiment, the covariance matrix C_i is a 2 × 2 matrix, and its eigenvalues and eigenvectors are relatively easy to compute, so a feature amount reflecting the entire shape of the closed region can be extracted with a relatively small amount of computation.

In the present embodiment, the feature amount extraction unit 21 obtains the angle of the principal axis of inertia of the closed region from the eigenvector e_i^max of the covariance matrix C_i; however, the feature amount extraction unit 21 may instead extract the angle θ of the principal axis of inertia by Equation 4.

[Equation 4: image in original]

The coordinates used when the feature amount extraction unit 21 obtains the covariance matrix C_i may include the coordinates of the pixels on the boundary line of the closed region. Alternatively, the coordinates used to obtain the covariance matrix C_i may be those of white pixels in the closed region thinned out at intervals of a predetermined number of pixels in each of the x-axis direction (horizontal) and the y-axis direction (vertical). The coordinates used to obtain the covariance matrix C_i may also be those of a predetermined number of white pixels in the closed region selected on the basis of random numbers.

Further, in the present embodiment, the feature amount extraction unit 21 calculates the adjacent-region centroid position n_i defined by Equations 5 and 6, and extracts it as an additional feature amount of the closed region i.

[Equation 5: image in original]
[Equation 6: image in original]

Here, O_i denotes the group of regions adjacent to the closed region i, that is, the regions sharing part of its boundary line. O_i is taken to include the background region (the region whose boundary line includes part or all of the edges of the frame). α_k is the area of the closed region k, and β_k is a ratio in the range from 0 to 1 representing the degree to which the boundary line of the closed region i is in contact with the closed region k. σ_0 is a predetermined coefficient. In Equations 5 and 6, when the closed region k is the background region, the adjacent-region centroid position n_i is calculated with α_k = α_i and t_k = t_i. Although O_i includes the background region in the present embodiment, an embodiment in which O_i does not include the background region may also be adopted.
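Equations 5 and 6 survive only as images in the original, so their exact form is not recoverable from the text. The sketch below therefore encodes one plausible reading as a stated assumption: n_i as an average of the neighbor centroids t_k weighted by the area α_k and the contact ratio β_k, with the background neighbor substituted by (α_i, t_i) as the text prescribes. The weighting, the omission of σ_0, and all names are assumptions, not the patent's actual formula.

```python
def neighbor_centroid_position(region, neighbors):
    """Hedged sketch of the adjacent-region centroid position n_i.

    ASSUMPTION: Equations 5-6 are images in the original; here n_i is
    taken to be an average of the neighbor centroids weighted by
    area (alpha_k) and contact ratio (beta_k). The real equations also
    involve a coefficient sigma_0, which this sketch omits.

    region:    dict with keys "area" (alpha_i) and "centroid" (t_i).
    neighbors: list of dicts with keys "area", "centroid", "beta",
               and "background" (True for the background region).
    """
    num_x = num_y = den = 0.0
    for k in neighbors:
        # For the background region the text prescribes
        # alpha_k = alpha_i and t_k = t_i.
        area = region["area"] if k["background"] else k["area"]
        cx, cy = region["centroid"] if k["background"] else k["centroid"]
        w = area * k["beta"]  # assumed weight: alpha_k * beta_k
        num_x += w * cx
        num_y += w * cy
        den += w
    return (num_x / den, num_y / den)
```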

Between consecutive frames, the adjacency relationships of closed regions are likely to be similar. According to the present embodiment, since the feature amounts of a closed region are extracted taking the features of the adjacent closed regions into account, the accuracy of associating closed regions with each other can be improved.

In the present embodiment, the cost calculation unit 22 calculates, based on the feature amounts, a cost indicating the degree of difference between two closed regions. Specifically, the cost calculation unit 22 calculates the cost a_ij between the closed region i and the closed region j using Equations 7 to 11.

[Equation 7: image in original]
[Equation 8: image in original]
[Equation 9: image in original]
[Equation 10: image in original]
[Equation 11: image in original]

In Equation 7, w_angle, w_scale, w_pos, and w_neighbor are predetermined constants for weighting the respective terms. Equation 7 defines the cost a_ij as the weighted sum of a_ij^angle, which indicates the degree of difference between the angles of the principal axes of inertia extracted by the feature amount extraction unit 21 (or the difference in inclination between the ellipses representing the coordinate distributions), a_ij^scale, which indicates the degree of difference between the eigenvalues (or the difference in size between the ellipses representing the coordinate distributions), a_ij^pos, which indicates the degree of difference between the centroid positions, and a_ij^neighbor, which indicates the degree of difference between the adjacent-region centroid positions. The cost a_ij is 0 when the feature amounts of the closed region i and those of the closed region j are identical. The centroids t_i and t_j may be calculated as values normalized to the frame size (image size); doing so enables suitable association even between frames of different sizes.
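The weighted-sum structure of the cost can be sketched as below. Only the structure of Equation 7 (a weighted sum of four terms that vanishes for identical features) is stated in the text; Equations 8 through 11 are images in the original, so the individual difference measures used here (absolute angle difference, eigenvalue differences, Euclidean distances) are illustrative assumptions, as are the parameter names.

```python
import math

def pair_cost(fi, fj, w_angle=1.0, w_scale=1.0, w_pos=1.0, w_neighbor=1.0):
    """Hedged sketch of the cost a_ij between two closed regions.

    fi, fj: feature dicts with keys "theta" (principal-axis angle),
    "eigvals" ((lam_max, lam_min)), "centroid" (t_i), and "neighbor"
    (adjacent-region centroid position n_i). The weighted-sum shape
    follows Equation 7; the per-term distances are simple illustrative
    choices (Equations 8-11 are images in the original) that are 0 for
    identical features.
    """
    a_angle = abs(fi["theta"] - fj["theta"])
    a_scale = sum(abs(a - b) for a, b in zip(fi["eigvals"], fj["eigvals"]))
    a_pos = math.dist(fi["centroid"], fj["centroid"])
    a_neighbor = math.dist(fi["neighbor"], fj["neighbor"])
    return (w_angle * a_angle + w_scale * a_scale
            + w_pos * a_pos + w_neighbor * a_neighbor)
```

As the text requires, identical feature sets yield a cost of exactly 0.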

The cost a_ij may instead be defined as a weighted average of a_ij^angle, a_ij^scale, and a_ij^pos, without considering a_ij^neighbor.

In the present embodiment, the closed region association unit 23 associates each closed region of the line drawing of the first frame with at most one closed region of the second frame, based on the sum of the costs calculated from the feature amounts of the closed regions of the line drawings. In the present embodiment, the first frame and the second frame are consecutive frames, such as the frames F1 and F2.

First, the closed region association unit 23 calculates a cost table for associating closed regions, whose elements are the costs a_ij between each closed region i of the line drawing of the first frame f and each closed region j of the line drawing of the second frame f+1. With N_f denoting the number of regions of the first frame f and N_{f+1} the number of regions of the second frame f+1, this cost table is a table of N_f rows and N_{f+1} columns. Next, the closed region association unit 23 refers to this cost table and associates the closed regions with each other so that the sum of the costs between associated closed regions is minimized. Here, the closed region association unit 23 associates each closed region i with at most one closed region j. When N_f > N_{f+1}, the closed region association unit 23 leaves (N_f − N_{f+1}) of the closed regions of the line drawing of the first frame unassociated.

More specifically, the closed region association unit 23 solves a bipartite graph matching problem in which the closed regions are vertices, the associations are edges, and the closed regions i and the closed regions j form the two vertex sets. Based on the costs, the closed region association unit 23 obtains a maximum matching using the Hungarian method, thereby associating the regions with each other.
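The patent uses the Hungarian method for this assignment; the brute-force search below is a simple stand-in that produces the same minimum-total-cost assignment for small cost tables (each region of the first frame matched to at most one region of the second, and vice versa). For real frame pairs a polynomial-time Hungarian implementation would replace the permutation search.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Associate rows (regions of frame f) with columns (regions of
    frame f+1) so the total cost is minimized, each row matched to at
    most one column. Brute-force stand-in for the Hungarian method,
    adequate only for small N_f x N_{f+1} cost tables.

    Returns a dict {row_index: column_index}; when there are more rows
    than columns, (N_f - N_{f+1}) rows are left unmatched.
    """
    n_rows, n_cols = len(cost), len(cost[0])
    best, best_pairs = float("inf"), {}
    if n_rows <= n_cols:
        # Choose a distinct column for every row.
        for cols in permutations(range(n_cols), n_rows):
            total = sum(cost[r][c] for r, c in enumerate(cols))
            if total < best:
                best, best_pairs = total, dict(enumerate(cols))
    else:
        # More rows than columns: choose which rows get matched.
        for rows in permutations(range(n_rows), n_cols):
            total = sum(cost[r][c] for c, r in enumerate(rows))
            if total < best:
                best, best_pairs = total, {r: c for c, r in enumerate(rows)}
    return best_pairs
```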

FIG. 7 is an image diagram illustrating an example of associating closed regions between two images. FIG. 7 shows the associations between the closed regions of the frames F1 and F2. In FIG. 7, each association is represented by a line segment connecting the centroids of the associated closed regions. For example, the closed regions R101 and R201, R102 and R202, and R103 and R203 are associated with each other.

FIG. 8 is an image diagram illustrating an example of associating closed regions across a plurality of images. FIG. 8 shows the associations between the closed regions of the frames F1 and F2, of the frames F2 and F3, and of the frames F3 and F4. In FIG. 8, as in FIG. 7, each association is represented by a line segment connecting the centroids of the associated closed regions.

In FIG. 8, the number of closed regions of the frame F2 is larger than that of the frame F3. Therefore, for example, the closed region association unit 23 does not associate the closed region R202 of the frame F2 with any closed region of the frame F3. This means that while the line drawing of the frame F2 contains the closed region R202, which forms the left ear of the drawn character, in the line drawing of the frame F3 the character's left ear is hidden: no closed region forming the left ear exists there, and thus no closed region to be processed collectively together with the closed region R202 exists in the frame F3.

Also in FIG. 8, the number of closed regions of the frame F3 is smaller than that of the frame F4. Therefore, for example, the closed region association unit 23 does not associate the closed region R404 of the frame F4 with any closed region of the frame F3.

From the associations made by the closed region association unit 23, the program according to the present embodiment can derive a sequence of associations of closed regions across a plurality of consecutive frames (hereinafter referred to as a "chain"). The closed region association unit 23 associates the closed region R101 of the frame F1 with the closed region R201 of the frame F2, the closed region R201 with the closed region R301 of the frame F3, and the closed region R301 with the closed region R401 of the frame F4. From these associations, the program according to the present embodiment can derive a chain associating the closed regions R101, R201, R301, and R401.
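The chain derivation just described can be sketched by following the per-frame-pair association maps; the function name and data layout are illustrative, and for brevity this sketch only starts chains at regions of the first frame (regions that first appear in later frames would need the same walk started there).

```python
def build_chains(pair_maps):
    """Derive chains of associated closed regions across frames.

    pair_maps: list of dicts; pair_maps[f] maps a region id of frame f
    to its associated region id in frame f+1 (unassociated regions are
    simply absent from the dict). Returns a list of chains, each a list
    of region ids, one per frame for as long as the association
    continues.
    """
    chains = []
    for start in pair_maps[0]:
        chain = [start]
        for fmap in pair_maps:
            nxt = fmap.get(chain[-1])
            if nxt is None:
                break  # the chain ends where the association stops
            chain.append(nxt)
        chains.append(chain)
    return chains
```

With the associations from the text (R101→R201→R301→R401, and R102→R202 ending at frame F3), the chain for R101 spans all four frames while the chain for R102 stops after frame F2.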

According to the present embodiment, for example, when the user specifies the closed region R101 as a coloring target together with its color, the associated closed regions R101, R201, R301, and R401 can easily be colored with the specified color at once via the chain described above, which reduces the labor of coloring the line drawings compared with coloring each closed region individually.

<Process flow>
The flow of processing of the program according to the present embodiment will be described using the flowcharts of FIGS. 9 and 10. The specific contents and order of the processing shown in the flowcharts are examples, and processing contents and an order suited to the embodiment are preferably adopted as appropriate.

FIG. 9 is a flowchart showing the flow of processing for associating closed regions across a plurality of images. This processing flow starts when the user performs an operation to newly load the line drawings of the frames constituting an animation into the information processing apparatus 1.

In step S101, the program according to the present embodiment captures the line drawings 3 on paper media as digital images via the scanner 2, thereby inputting a plurality of images as the line drawings of consecutive frames.

In step S102, the program according to the present embodiment detects the closed regions of the line drawing of each frame. The program detects a closed region by scanning, among the pixels in the line drawing, the (black) pixels forming the boundary between the inside and the outside of a region. In doing so, the program distinguishes already detected closed regions from undetected closed regions by using a fill algorithm (such as seed fill).
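The detection in step S102 can be sketched with an iterative seed fill that labels each not-yet-detected white region; a grid of 0 (white interior) and 1 (black boundary) values stands in for the scanned line drawing, and the function name and 4-connectivity are illustrative choices.

```python
def label_closed_regions(grid):
    """Label the 4-connected white regions of a binary line drawing.

    grid: 2-D list, 0 = white (region interior), 1 = black boundary.
    Returns a same-sized label grid: boundary pixels keep -1, and every
    white pixel gets the id of its closed region. The iterative seed
    fill marks already-detected pixels so no region is scanned twice.
    """
    h, w = len(grid), len(grid[0])
    labels = [[-1] * w for _ in range(h)]
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != 0 or labels[sy][sx] != -1:
                continue  # boundary pixel or already-detected region
            stack = [(sx, sy)]  # seed of a newly found closed region
            labels[sy][sx] = next_id
            while stack:
                x, y = stack.pop()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and grid[ny][nx] == 0 and labels[ny][nx] == -1):
                        labels[ny][nx] = next_id
                        stack.append((nx, ny))
            next_id += 1
    return labels
```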

Pixels of colors other than black, such as blue and red, may also be treated as boundary pixels. A closed region surrounded by blue may be detected separately from the closed regions surrounded by black, as a closed region to be colored with a dark color; likewise, a closed region surrounded by red may be detected separately as a closed region to be colored with a bright color. The closed region association unit 23 may then associate the separately detected closed regions surrounded by red with each other; in this way, closed regions to be colored in a bright color by batch processing can be associated with high accuracy. Similarly, the closed region association unit 23 may associate the separately detected closed regions surrounded by blue with each other, so that closed regions to be colored in a dark color by batch processing can be associated with high accuracy.

In step S103, the feature amount extraction unit 21 extracts the feature amounts of each closed region (the angle of the principal axis of inertia, the eigenvalues of the covariance matrix calculated from the coordinate components of the white pixels in the closed region, the average coordinates of the white pixels in the closed region, and the adjacent-region centroid position).

In steps S104 to S107, the closed regions are associated between all pairs of frames. Steps S104 and S107 form a loop over all frame pairs. First, in step S104, the program according to the present embodiment determines whether the closed regions have been associated between all frame pairs. If this is not yet the case, the processing proceeds to step S105; if it is, the processing ends. Next, in step S105, the program sets the next unprocessed pair of frames as the processing target. In step S107, the processing returns to step S104.

In step S105, the closed regions are associated with each other between the two frames being processed. The specific flow of the processing of step S105 is described later.

FIG. 10 is a flowchart showing the flow of processing for associating closed regions between two images. This flow shows the processing of step S105 in FIG. 9 in detail.

In steps S201 to S204, the cost calculation unit 22 calculates the cost table. Steps S201 and S204 form the cost-table calculation loop. First, in step S201, the cost calculation unit 22 determines whether the costs between all pairs of closed regions have been calculated. If not, the processing proceeds to step S202; if so, the processing proceeds to step S205. Next, in step S202, the cost calculation unit 22 sets the next unprocessed pair of closed regions between the consecutive frames as the processing target. In step S204, the processing returns to step S201.

In step S203, the cost calculation unit 22 calculates the cost (an entry of the cost table) between the two closed regions being processed, based on the feature amounts of the closed regions.

In step S205, the closed region association unit 23 refers to the cost table calculated by the cost calculation unit 22 and associates the closed regions between the frames being processed with each other so that the sum of the costs between associated closed regions is minimized.

As described above, according to the present embodiment, closed regions that should be processed collectively, for example by coloring, can be suitably and automatically associated across a plurality of frames. Moreover, by associating a colored closed region with an uncolored one, it becomes easy to automatically color the uncolored closed region using the color of the colored one. The processing to be performed collectively is not limited to coloring and may be processing such as movement, deformation, or the addition of characters and figures.

Further, according to the present embodiment, closed regions are associated between images based on feature amounts obtained by product-sum operations on the coordinate components of the points in each closed region, or on feature amounts obtained from the distribution of the distances between a reference point and the points in the closed region; therefore, suitable association is possible even when the images are line drawings without color shading information. Moreover, feature amounts that take the shape of the entire closed region into account allow more suitable association of closed regions. Complex shape features of a closed region, such as variations in unevenness in directions not parallel to the coordinate axes, can also potentially be reflected in the feature amounts.

Further, according to the present embodiment, compared with calculating feature amounts based only on the shapes of the parts obtained by dividing the boundary line into several pieces, the closed regions can be associated based on feature amounts that better reflect the overall shape of the closed region. For example, suitable association is possible even when the boundary line has large variations in unevenness. In addition, compared with setting a plurality of rigid bodies in each frame and associating the closed regions (rigid bodies) between frames by the amounts of deformation, such as rotation and translation, of the set rigid bodies, closed regions whose boundary lines change in complex ways between frames are easier to associate.

Further, according to the present embodiment, each closed region of the first image (frame) is associated with at most one closed region of the second image (frame) based on the sum of the costs; therefore, compared with associating closed regions simply based on the difference between two closed regions of the images, the closed regions can be associated with high accuracy across the image pair as a whole.

<< Embodiment 2 >>
In Embodiment 1 described above, the feature amount extraction unit 21 extracts, as feature amounts obtained by product-sum operations on the coordinate components of the points in a closed region, the angle of the principal axis of inertia of the closed region and so on. In Embodiment 2 described below, as a feature amount obtained from the distribution of the distances between a reference point and the points in the closed region, the feature amount extraction unit 21 obtains a function of the distance from the reference point to a moving point that travels along the boundary line of the closed region, with the path length from the moving point's starting point as the variable, applies a Fourier transform to the obtained function, and extracts the components whose frequencies are lower than a predetermined value. The configuration of the information processing apparatus 1 on which the program according to this embodiment runs, the closed region association unit 23 as a function of the information processing apparatus 1, and the flow of processing are the same as in Embodiment 1.

In this embodiment, the centroid of the closed region is adopted as the reference point, and the Euclidean distance is adopted as the distance. As the starting point of the moving point, the point on the boundary line that lies in the x-axis direction from the centroid and is closest to the centroid is adopted. The moving point moves counterclockwise from the starting point and travels once around the boundary line.

FIG. 11 is a diagram illustrating an example of a moving point on a closed region. FIG. 11 shows the moving point M101 for the closed region R101; the point P100 is the starting point. First, the feature amount extraction unit 21 obtains the function of the distance between the moving point M101 and the centroid C101, with the path length from the starting point P100 as the variable.

FIG. 12 is a graph showing an example of the function of the distance between the moving point and the centroid. FIG. 12 shows the curve of the function c(l), with the path length l from the starting point to the moving point on the horizontal axis and the distance between the centroid and the moving point on the vertical axis. This graph corresponds to the example of the closed region R101 in FIG. 11.

Next, the feature amount extraction unit 21 expands the function c(l) into a Fourier series up to an order lower than a predetermined order (for example, the 16th order), and extracts as a feature amount the coefficient vector whose elements are the coefficients of the terms described by the Fourier descriptor. In other words, the components whose frequencies are lower than a predetermined value are extracted. Here, the coefficient f_n of the n-th term of the Fourier descriptor in this embodiment is defined by Equation 12, using the order N, the path length l from the starting point, the length L of the boundary line, the function c(l) giving the distance between the centroid and the moving point, and the imaginary unit i.

[Equation 12: image in original]
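Equation 12 is an image in the original; the sketch below assumes the standard Fourier-descriptor form f_n = (1/L) ∫ c(l) e^{−2πinl/L} dl and approximates it with a discrete sum over uniformly spaced boundary samples of c(l). The function name, the sampling scheme, and the choice of discrete approximation are all assumptions.

```python
import cmath

def fourier_descriptor(c_samples, order):
    """Low-frequency Fourier-descriptor coefficients of a
    centroid-distance function.

    c_samples: values of c(l) sampled at uniform path lengths around one
    full loop of the boundary. Returns the coefficients f_0 .. f_order,
    assuming the standard form f_n = (1/L) * integral of
    c(l) * exp(-2*pi*i*n*l/L) dl, approximated by a discrete sum.
    """
    m = len(c_samples)
    coeffs = []
    for n in range(order + 1):
        s = sum(c * cmath.exp(-2j * cmath.pi * n * k / m)
                for k, c in enumerate(c_samples))
        coeffs.append(s / m)
    return coeffs
```

For a circular region c(l) is constant, so only f_0 is non-zero; asymmetric boundaries spread energy into the higher coefficients, which is what makes the coefficient vector a shape feature.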

In this embodiment, the cost calculation unit 22 calculates the cost indicating the degree of difference between two closed regions based on the feature amount, that is, the coefficient vector whose elements are the coefficients f_n. Specifically, the cost calculation unit 22 calculates the Euclidean distance between the coefficient vectors of the two closed regions as the cost. Other calculation methods may be adopted, for example, weighting by the order of the coefficients. The cost calculation unit 22 may also calculate a weighted sum of the cost calculated from the coefficient vectors and a_ij^neighbor, which indicates the degree of difference between the adjacent-region centroid positions described in Embodiment 1, and adopt the calculated weighted sum as the cost.

According to this embodiment, the boundary line of a closed region can be approximated by the coefficient vector, and feature amounts that approximate the boundary line more precisely can be extracted as the order is increased.

<< Embodiment 3 >>
In Embodiment 3, the feature quantity extraction unit 21 selects a plurality of points on the boundary line of the closed region according to a predetermined criterion, and extracts, as the feature quantity obtained from the distribution of distances of the points in the closed region from the reference point, a vector whose components are the distances between the selected points and the reference point. The configuration of the information processing apparatus 1 on which the program according to the present embodiment is executed, the closed region association unit 23 that is a function of the information processing apparatus 1, and the flow of processing are the same as in Embodiment 1.

In the present embodiment, the center of gravity of the closed region is adopted as the reference point. As in Embodiment 2, the program according to the present embodiment defines a start point on the closed region and handles the path along the boundary line from the start point. First, where L is the length of the boundary line, the feature quantity extraction unit 21 selects the 16 points on the boundary line whose paths from the start point are 0, L/16, 2L/16, 3L/16, ..., 15L/16 (this corresponds to selecting a plurality of points on the boundary line according to a predetermined criterion). In other words, the feature quantity extraction unit 21 selects the points that divide the boundary line into 16 equal parts. Next, the feature quantity extraction unit 21 extracts a vector whose components are the distances between the selected points and the center of gravity as the feature quantity of the closed region.

FIG. 13 is a conceptual diagram showing the distance between each selected point and the center of gravity, corresponding to the example of the closed region R101 in FIG. 11. In FIG. 13, the horizontal axis indicates the path l from the start point to the moving point, and the vertical axis indicates the distance between each selected point and the center of gravity. The points L01 to L16 plot, for paths from the start point of 0, L/16, 2L/16, 3L/16, ..., 15L/16 respectively, the relationship between the path l of each point and its distance from the center of gravity. The feature quantity extraction unit 21 extracts, as the feature quantity, the vector whose elements are the distances from the center of gravity indicated by the ordinates of the points L01 to L16.

In the present embodiment, the feature quantity extraction unit 21 selects 16 points, but a larger or smaller number of points may be selected instead.

In the present embodiment, as in Embodiment 2, the cost calculation unit 22 calculates the Euclidean distance between the vectors extracted as the feature quantities as the cost.
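The Embodiment 3 feature quantity can be sketched as follows. This is a minimal illustration under the same assumptions as before (the function name is hypothetical and the boundary is an ordered list of 2-D points); the cost between two such vectors is their Euclidean distance, as stated above.

```python
import numpy as np

def boundary_distance_vector(boundary, n_points=16):
    """Distances from the centroid to n_points boundary points that divide
    the boundary into n_points equal parts, starting at the start point."""
    boundary = np.asarray(boundary, dtype=float)        # (N, 2) ordered points
    centroid = boundary.mean(axis=0)
    closed = np.vstack([boundary, boundary[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    l = np.concatenate([[0.0], np.cumsum(seg)[:-1]])
    L = seg.sum()                                       # boundary line length
    # sample positions at paths 0, L/16, 2L/16, ..., 15L/16 from the start
    t = np.arange(n_points) * L / n_points
    x = np.interp(t, l, boundary[:, 0], period=L)
    y = np.interp(t, l, boundary[:, 1], period=L)
    return np.hypot(x - centroid[0], y - centroid[1])
```

Note that, unlike the Fourier coefficients of Embodiment 2, this vector depends directly on where the start point is placed, so comparable start points must be chosen across the two images.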

<< Embodiment 4 >>
In Embodiment 4, the feature quantity extraction unit 21 extracts, as the feature quantity obtained from the distribution of distances of the points in the closed region from the reference point, a bivariate histogram whose variables are the radius and the argument, in polar coordinates with the reference point as the origin, of each point in the closed region. The configuration of the information processing apparatus 1 on which the program according to the present embodiment is executed, the closed region association unit 23 that is a function of the information processing apparatus 1, and the flow of processing are the same as in Embodiment 1.

In the present embodiment, the center of gravity of the closed region is adopted as the reference point, and a Shape Context is adopted as the bivariate histogram. The cost calculation unit 22 uses the chi-squared distance between the histograms, which are the feature quantities, to calculate the cost indicating the degree of difference between the two closed regions.
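A simplified version of this histogram feature can be sketched as below. The function names are hypothetical, and the sketch deviates from the full Shape Context in two ways it is worth flagging: the canonical Shape Context builds one log-polar histogram per boundary point, whereas this sketch builds a single histogram about the centroid, and it uses linear rather than log-spaced radius bins for brevity.

```python
import numpy as np

def polar_histogram(points, r_bins=5, a_bins=12):
    """Bivariate (radius, argument) histogram of region points in polar
    coordinates about the centroid, normalized to sum to 1."""
    points = np.asarray(points, dtype=float)            # (N, 2) points in region
    d = points - points.mean(axis=0)                    # centroid as origin
    r = np.hypot(d[:, 0], d[:, 1])                      # radius
    a = np.arctan2(d[:, 1], d[:, 0])                    # argument in (-pi, pi]
    # Shape Context conventionally uses log-spaced radius bins;
    # linear bins keep this sketch simple
    r_edges = np.linspace(0.0, r.max() + 1e-9, r_bins + 1)
    a_edges = np.linspace(-np.pi, np.pi, a_bins + 1)
    h, _, _ = np.histogram2d(r, a, bins=[r_edges, a_edges])
    return h.ravel() / len(points)

def chi2_cost(h1, h2, eps=1e-12):
    """Chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Because the histogram is computed about the region's own centroid, it is invariant to translation between frames, which is exactly what is needed when matching the same region across two drawings.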

<< Other Embodiments >>

As another embodiment, the line drawing of each frame may be handled as a vector image. In this case, the feature quantity extraction unit 21 may calculate the center of gravity t_i and the covariance matrix C_i by Equation 13, where A is the area of the closed region i and x is a point in the closed region i.

[Equation 13 (shown only as an image in the original publication)]

The feature quantity extraction unit 21 may then calculate the eigenvalues and eigenvectors of this covariance matrix C_i, and extract the angle of the principal axis of inertia from the eigenvectors as a feature quantity obtained by a product-sum operation on the coordinate components of each point in the closed region. Alternatively, instead of obtaining the angle from the eigenvectors of the covariance matrix C_i, the feature quantity extraction unit 21 may extract the angle θ of the principal axis of inertia by Equation 14, where the x and y components of a point x in the closed region i and of t_i are denoted x_x, x_y, t_ix, and t_iy, respectively.

[Equation 14 (shown only as an image in the original publication)]

In this way, the closed regions of different frames can be suitably associated with each other for vector images. The line drawing of each frame may also be handled by converting the vector image into a raster image.
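Both routes to the principal-axis angle can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names are hypothetical, and it sums over sample points of the region rather than integrating over the region's area as Equations 13 and 14 (reproduced only as images in the original publication) would.

```python
import numpy as np

def inertia_axis_angle(points):
    """Angle of the principal axis of inertia, taken from the eigenvector of
    the largest eigenvalue of the 2x2 covariance matrix of the region points."""
    points = np.asarray(points, dtype=float)            # (N, 2) points in region
    t = points.mean(axis=0)                             # centroid t_i
    d = points - t
    C = d.T @ d / len(points)                           # covariance matrix C_i
    w, v = np.linalg.eigh(C)                            # eigenvalues ascending
    major = v[:, np.argmax(w)]                          # major-axis eigenvector
    return np.arctan2(major[1], major[0])

def inertia_axis_angle_closed_form(points):
    """Same angle via the closed-form arctangent of second central moments,
    in the spirit of the Equation 14 alternative."""
    points = np.asarray(points, dtype=float)
    d = points - points.mean(axis=0)
    mu20 = np.mean(d[:, 0] ** 2)                        # sum-of-products moments
    mu02 = np.mean(d[:, 1] ** 2)
    mu11 = np.mean(d[:, 0] * d[:, 1])
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

Since a principal axis has no preferred direction, the two functions may disagree by π; angles should therefore be compared modulo π when used as a matching feature.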

DESCRIPTION OF SYMBOLS
1 Information processing apparatus
2 Scanner
3 Line drawing
21 Feature quantity extraction unit
22 Cost calculation unit
23 Closed region association unit

Claims (10)

1. A program for causing a computer to associate closed regions with each other among a plurality of images having closed regions, the program comprising:
a feature quantity extraction step of extracting a feature quantity of a closed region, the feature quantity being obtained by a product-sum operation on the coordinate components of each point in the closed region, or obtained from a distribution of distances of the points in the closed region from a reference point; and
a closed region association step of associating each closed region of a first image with each closed region of a second image based on the feature quantities, extracted in the feature quantity extraction step, of the closed regions of the images.
2. The program according to claim 1, wherein, in the feature quantity extraction step, an angle of a principal axis of inertia of the closed region is extracted as the feature quantity obtained by the product-sum operation on the coordinate components of each point in the closed region.
3. The program according to claim 1 or 2, wherein, in the feature quantity extraction step, the feature quantity obtained by the product-sum operation on the coordinate components of each point in the closed region is extracted based on eigenvalues and eigenvectors of a covariance matrix calculated from the coordinate components of each point in the closed region.
4. The program according to claim 1, wherein, in the feature quantity extraction step, as the feature quantity obtained from the distribution of distances of the points in the closed region from the reference point, a function of the distance from the reference point of a moving point on the boundary line of the closed region is obtained, with the path of the moving point from a start point as a variable, and components of frequencies lower than a predetermined value are extracted from a result of a Fourier transform of the obtained function.
5. The program according to claim 1, wherein, in the feature quantity extraction step, as the feature quantity obtained from the distribution of distances of the points in the closed region from the reference point, a plurality of points on the boundary line of the closed region are selected according to a predetermined criterion, and a vector whose components are the distances between the selected points and the reference point is extracted.
6. The program according to claim 1, wherein, in the feature quantity extraction step, as the feature quantity obtained from the distribution of distances of the points in the closed region from the reference point, a bivariate histogram is extracted whose variables are the radius and the argument, in polar coordinates with the reference point as the origin, of each point in the closed region.
7. The program according to any one of claims 1 to 6, wherein, in the feature quantity extraction step, an adjacent region centroid position, which is a coordinate calculated based on the centroids of the closed regions adjacent to the closed region from which the feature quantity is extracted, is further extracted as a feature quantity of the closed region.
8. The program according to any one of claims 1 to 7, further comprising a cost calculation step of calculating, based on the feature quantities extracted in the feature quantity extraction step, a cost indicating a degree of difference between two closed regions, wherein the closed region association step associates each closed region of the first image with at most one closed region of the second image based on a sum of the costs between the associated closed regions.
9. A method of associating closed regions with each other among a plurality of images having closed regions, wherein a computer executes:
a feature quantity extraction step of extracting a feature quantity of a closed region, the feature quantity being obtained by a product-sum operation on the coordinate components of each point in the closed region, or obtained from a distribution of distances of the points in the closed region from a reference point; and
a closed region association step of associating each closed region of a first image with each closed region of a second image based on the feature quantities, extracted in the feature quantity extraction step, of the closed regions of the images.
10. An information processing apparatus that associates closed regions with each other among a plurality of images having closed regions, the apparatus comprising:
extraction means for extracting a feature quantity of a closed region, the feature quantity being obtained by a product-sum operation on the coordinate components of each point in the closed region, or obtained from a distribution of distances of the points in the closed region from a reference point; and
closed region association means for associating each closed region of a first image with each closed region of a second image based on the feature quantities, extracted by the extraction means, of the closed regions of the images.
JP2012258792A 2012-11-27 2012-11-27 Program, method, and information processor Pending JP2014106713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012258792A JP2014106713A (en) 2012-11-27 2012-11-27 Program, method, and information processor


Publications (1)

Publication Number Publication Date
JP2014106713A true JP2014106713A (en) 2014-06-09

Family

ID=51028149

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012258792A Pending JP2014106713A (en) 2012-11-27 2012-11-27 Program, method, and information processor

Country Status (1)

Country Link
JP (1) JP2014106713A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017199280A (en) * 2016-04-28 2017-11-02 株式会社セルシス Method of associating closed area of each of at least two images including plurality of closed areas, and program
JP6283083B1 (en) * 2016-10-18 2018-02-21 株式会社セルシス Method and program for associating regions existing in two images
JP2019096977A (en) * 2017-11-21 2019-06-20 富士通株式会社 Visualization method, visualization device and visualization program
JP7062923B2 (en) 2017-11-21 2022-05-09 富士通株式会社 Visualization method, visualization device and visualization program
JP2021033686A (en) * 2019-08-26 2021-03-01 株式会社セルシス Image area extraction processing method and image area extraction processing program
KR20220135760A (en) * 2021-03-31 2022-10-07 서울대학교산학협력단 Apparatus and method for image matching based on matching point
KR102624308B1 (en) 2021-03-31 2024-01-15 서울대학교산학협력단 Apparatus and method for image matching based on matching point
