WO2011128933A1 - Image vectorization device, image vectorization method, and image vectorization program - Google Patents

Image vectorization device, image vectorization method, and image vectorization program

Info

Publication number
WO2011128933A1
WO2011128933A1 (application PCT/JP2010/002671)
Authority
WO
WIPO (PCT)
Prior art keywords
image
vectorization
vector
raster
partial region
Prior art date
Application number
PCT/JP2010/002671
Other languages
French (fr)
Japanese (ja)
Inventor
田口進也
Original Assignee
三菱電機株式会社
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to PCT/JP2010/002671 priority Critical patent/WO2011128933A1/en
Publication of WO2011128933A1 publication Critical patent/WO2011128933A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00: Image coding
    • G06T 9/20: Contour coding, e.g. using detection of edges
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/469: Contour-based spatial representations, e.g. vector-coding
    • G06V 10/473: Contour-based spatial representations, e.g. vector-coding using gradient analysis

Definitions

  • the present invention relates to, for example, an image vectorization apparatus, an image vectorization method, and an image vectorization program for converting a raster image into a vector image format.
  • the “vector image” of the present invention refers to an image represented by points, lines, and surfaces, with its color information represented by parametric equations.
  • point refers to a two-dimensional coordinate value, a three-dimensional coordinate value, or a more generally expressed N-dimensional coordinate value.
  • a “line” refers to a straight line or a curve connecting two points.
  • the “surface” refers to a region surrounded by a plurality of lines.
  • a “raster image” refers to an image represented by a set of colored points.
  • image vectorization refers to converting a raster image into a vector image.
  • FIG. 13 shows an example of image vectorization by the method of Non-Patent Document 1.
  • the horizontal axis represents image coordinates
  • the vertical axis represents pixel values (color information).
  • the two-dimensional image coordinates are expressed by one-dimensional image coordinates.
  • as shown in FIG. 13A, when there is no noise in the pixel values of the raster image, image vectorization is realized by approximating the pixel values with a smooth bicubic surface L.
  • the portion surrounded by a broken line in FIG. 13A is a portion where the color changes greatly, and corresponds, for example, to an edge (contour) of an object.
  • in the method of Non-Patent Document 1, image vectorization is performed so as to faithfully reproduce the input raster image, so even noise present in the raster image is faithfully reproduced; for example, as shown in FIG. 13B, when noise is added to the raster image, even the noise pixel value Z is reproduced, yielding a noise-affected approximate surface L′.
  • the method of Non-Patent Document 1 also presupposes that the edge of an object is clearly imaged, so even for a raster image in which the edge of an object is blurred, there is no processing for emphasizing the edge before converting the image to a vector image.
  • the present invention has been made to solve the above-described problems, and an object thereof is to obtain an image vectorization apparatus, an image vectorization method, and an image vectorization program that automatically vectorize a raster image containing noise without being affected by the noise.
  • An image vectorization apparatus includes an image vectorization learning unit that converts a sample raster image into a vector image according to a predetermined method, an image vectorization pattern database that holds a pair of the sample raster image and the vector image, and an image vectorization execution unit that converts an input raster image into a vector image using the pairs of sample raster images and vector images held in the image vectorization pattern database.
  • the image vectorization method of the present invention includes an image vectorization learning step of converting a sample raster image into a vector image according to a predetermined method, and an image vectorization execution step of converting an input raster image into a vector image using pairs of sample raster images and vector images.
  • the image vectorization program of the present invention causes a computer to execute an image vectorization learning procedure of converting a sample raster image into a vector image according to a predetermined method, and an image vectorization execution procedure of converting an input raster image into a vector image using pairs of sample raster images and vector images.
  • by learning the image vectorization strategy in advance, image vectorization can be performed automatically on a raster image containing noise without being affected by the noise.
  • FIG. 2 is a flowchart illustrating the operation of the image vectorization learning unit of the image vectorization apparatus according to Embodiment 1.
  • FIG. 3A shows a conversion process of the image vectorization learning unit,
  • FIG. 3A shows a raster image before conversion,
  • FIG. 3B shows a vector image after conversion.
  • FIG. 4 is a diagram showing an example of a local coordinate system defined on the image.
  • FIG. 5 is a diagram showing an example in which the image vectorization learning unit vectorizes a raster image to which noise has been added.
  • FIG. 6 is a block diagram showing the configuration of the image vectorization pattern database.
  • FIG. 13 illustrates the image vectorization method of Non-Patent Document 1; FIG. 13A shows a raster image without noise, and FIG. 13B shows a raster image containing noise.
  • the image vectorization apparatus 1 shown in FIG. 1 includes a raster image database 2 that stores a plurality of raster images serving as samples for learning an image vectorization strategy, an image vectorization learning unit 3 that converts the sample raster images into vector images, an image vectorization pattern database 4 that stores pairs of a sample raster image and its vector image, and an image vectorization execution unit 5 that converts an input raster image into a vector image using the image vectorization pattern database 4.
  • each of the raster image database 2, the image vectorization learning unit 3, the image vectorization pattern database 4, and the image vectorization execution unit 5, which are components of the image vectorization device 1, is assumed to be implemented as dedicated hardware such as a semiconductor integrated circuit board on which an MPU (Micro Processing Unit) is mounted.
  • the image vectorization apparatus 1 may instead be configured by a computer; in this case, an image vectorization program describing the processing contents of the raster image database 2, the image vectorization learning unit 3, the image vectorization pattern database 4, and the image vectorization execution unit 5 is stored in the memory of the computer, and the CPU (Central Processing Unit) of the computer executes the image vectorization program stored in the memory.
  • in step ST1, the image vectorization learning unit 3 acquires a plurality of raster images stored in the raster image database 2, and converts the acquired raster images into vector images.
  • FIG. 3 is a diagram for explaining the conversion process of the image vectorization learning unit 3.
  • FIG. 3A shows the raster image I before conversion
  • FIG. 3B shows the image vector V after conversion.
  • the image vectorization learning unit 3 acquires a raster image I having a size of N × M pixels from the raster image database 2 and converts it into a vector image V using an existing image processing function or the like according to an operator's instruction, thereby learning the image vectorization strategy in advance. For example, since the raster image I in FIG. 3 is blurred, there is ambiguity in the edge position.
  • the raster image I is therefore converted into a vector image V composed of a plane A1 bounded by points P1, P2, P3, and P4 and a plane A2 bounded by points P2, P5, P3, and P6.
  • FIG. 4 is a diagram showing an example of the local coordinate system defined in the image, and shows the local coordinate system (U, V) of the plane A1 with the point P1 as the origin P0 (0, 0).
  • u and v are real parameters taking values between 0 and 1
  • information such as color and luminance at an arbitrary point P (u, v) in the plane A1 can be expressed by an arbitrary parametric function F (u, v).
  • a, b, and c are constants and may be arbitrarily specified by the user.
  • the image vectorization learning unit 3 may generate a vector image by removing noise from the input raster image based on an operator's judgment or the like. Thereby, the noise level can be learned in advance.
  • FIG. 5 shows an example in which a raster image to which noise is added is converted into an image vector.
  • the horizontal axis represents image coordinates
  • the vertical axis represents pixel values (luminance information).
  • the two-dimensional image coordinates are expressed by one-dimensional image coordinates.
  • the image vectorization learning unit 3 calculates an image feature amount from the sample raster image acquired from the raster image database 2.
  • the image feature amount is arbitrary, such as a luminance value included in the raster image, a color histogram, an edge histogram, a color-edge correlation function, or the like.
  • local feature amounts of raster images such as HOG (Histogram of Oriented Gradients) and SIFT (Scale Invariant Feature Transform) may be calculated.
  • the image vectorization learning unit 3 records in the image vectorization pattern database 4 a pair of the image feature amount extracted from the sample raster image and the vector image obtained by converting the raster image into an image vector.
  • FIG. 6 is a block diagram showing the configuration of the image vectorization pattern database 4.
  • the image vectorization learning unit 3 converts the sample raster images of the raster image database 2 into vector images and stores them in the image vectorization pattern database 4, thereby learning information on the image vectorization strategy, including the judgment of noise and edges.
  • in step ST11, the image vectorization execution unit 5 acquires a raster image as an input.
  • the raster image to be acquired may have an arbitrary size.
  • the image vectorization execution unit 5 divides the acquired raster image into partial regions.
  • FIG. 8A shows an example of a raster image I acquired by the image vectorization execution unit 5
  • FIG. 8B shows an example of partial area division.
  • the image vectorization execution unit 5 equally divides the input raster image I into fine rectangular areas. Note that the size and shape of the partial area are arbitrary, and may be any size as long as the image feature amount can be acquired from the partial area.
  • the image vectorization execution unit 5 calculates each image feature amount from each partial region of the raster image. For example, as shown in FIG. 8C, the image vectorization execution unit 5 calculates an image feature amount Ki from the i-th partial region Ri.
  • as the image feature amount Ki, the same information as the sample image feature amount Fj calculated in step ST2 described above is calculated.
  • the image vectorization execution unit 5 repeats the same processing for all the partial areas of the raster image I.
  • the image vectorization execution unit 5 searches the image vectorization pattern database 4 using the image feature amount of each partial region, extracts a plurality of sample vector images suited to each partial region as candidate vector images, and calculates the candidate probability of each candidate vector image.
  • each vector image Vj whose paired image feature amount Fj lies within a fixed distance of Ki is acquired from the image vectorization pattern database 4 and set as a candidate vector image of the partial region Ri; the image vectorization execution unit 5 then calculates the candidate probability p(i, j) of each candidate vector image from the distance d(i, j).
  • as the distance d(i, j) between the image feature amounts Ki and Fj, when Ki and Fj are given as one-dimensional arrays, the Euclidean distance between Ki and Fj, the inner product of Ki and Fj, or the normalized cross-correlation of Ki and Fj may be used, for example.
  • when the image feature amounts are color histograms, the distance d(i, j) may be calculated as the Bhattacharyya distance.
  • the candidate probability p(i, j) can be calculated, for example, from the distance d(i, j) as shown in the following expression (2).
  • p(i, j) = a′ × exp{-b′ × d(i, j) × d(i, j)}   (2)
  • a ′ is a constant for normalization
  • b ′ is a positive constant determined by the user.
  • the image vectorization execution unit 5 obtains the three candidate vector images V1, V2, and V3 shown in FIG. 9 for the partial region Ri shown in FIG. 8C, together with their candidate probabilities.
  • the image vectorization execution unit 5 uses the plurality of candidate vector images and their candidate probabilities to obtain a vector image for each partial region divided in step ST12, deforming the candidate vector images obtained in step ST14 while maintaining consistency with the surrounding partial regions.
  • the image vectorization execution unit 5 selects one candidate vector image that maximizes the candidate probability of the candidate vector image in each partial region, and assigns it as the initial value of the image vector of each partial region.
  • the image vectorization execution unit 5 calculates a boundary error f (j) between the candidate vector image Vj and the peripheral partial region Sk as in the following expression (3).
  • f(j) = (pixel value of W1 - pixel value of Y1) × (pixel value of W1 - pixel value of Y1) + (pixel value of W2 - pixel value of Y2) × (pixel value of W2 - pixel value of Y2)   (3)
  • the pixel value of W1 indicates a pixel value generated in the boundary region W1 of the vector image corresponding to the position of the peripheral partial region S1 (that is, the image feature amount calculated in step ST2).
  • the same applies to W2, Y1, and Y2.
  • the image vectorization execution unit 5 calculates the cost function E(j|i):
  • E(j|i) = p(i, j) × q(j)   (4)
  • q(j) = a″ × exp{-b″ × f(j)}   (5)
  • p (i, j) is the candidate probability of each candidate vector image obtained by the image vectorization execution unit 5 in step ST14 described above.
  • q (j) is a probability representing the continuity of the boundary, and is defined as in the above equation (5).
  • a ′′ is a normalization constant, and b ′′ is a positive constant set by the user.
  • the cost function E(j|i) is a function of the variable j.
  • j is one of 1, 2, and 3 shown in FIG.
  • the raster image can be converted into an image vector so that the boundary areas of the partial areas are continuous.
  • the image vectorization execution unit 5 selects the candidate vector image that maximizes the cost function E(j|i), and may then further correct the selected vector image.
  • FIG. 12 shows an example of a vector image correction method. As shown in FIG. 12A, when there is a slight difference in the boundary region between the line L1 of the target partial region Ri and the line L2 of the peripheral partial region Sk, the image vectorization execution unit 5 may deform the line L1 of the target partial region Ri as shown in FIG. 12B to achieve consistency with the peripheral partial region Sk.
  • the image vectorization execution unit 5 may, for example, use an iterative optimization method such as belief propagation, obtaining the vector image of each partial region Ri by maximizing the cost function E(j|i) uniformly over all partial regions.
  • the image vectorization execution unit 5 combines the selected vector images with respect to the partial regions Ri of the raster image I acquired in step ST11, and outputs a vector image V.
  • the image vectorization apparatus 1 includes the raster image database 2 that holds sample raster images, the image vectorization learning unit 3 that converts the sample raster images into vector images according to a predetermined method and calculates their image feature amounts, the image vectorization pattern database 4 that holds pairs of the image feature amounts and vector images,
  • and the image vectorization execution unit 5 that searches the image vectorization pattern database 4 for image feature amounts matching the image feature amounts of the input raster image and selects the vector images paired with the retrieved image feature amounts as the vector images of the input raster image. By creating the image vectorization pattern database 4 in advance, the image vectorization strategy, such as how much noise to remove and which parts of the image to regard as edges, can be held as patterns.
  • the raster image can be converted into an image vector without being affected by noise.
  • the image vectorization apparatus 1 can perform the desired image vectorization by changing the information stored in the image vectorization pattern database 4.
  • the image vectorization learning unit 3 intentionally performs an illustration-like image vectorization in which the number of colors of the image is reduced, and stores the pattern in the image vectorization pattern database 4, whereby the image vectorization execution unit 5 can automatically generate an illustration-like vector image.
  • by storing photorealistic image vectorization patterns resembling actual photographs in the image vectorization pattern database 4, vector images resembling actual photographs can be generated automatically.
  • the image vectorization execution unit 5 calculates a probability indicating the continuity of the boundary between the candidate vector images of a partial region of interest and those of its surrounding partial regions,
  • calculates a cost function by multiplying this probability by the candidate probability of the candidate vector image of the partial region of interest, and selects the candidate vector image that maximizes the cost function as the optimal vector image for the partial region of interest. This makes it possible to take into account both the suitability of the candidate vector image in the partial region of interest and the consistency with the surrounding partial regions, improving the continuity of the boundary regions of the partial regions.
  • the image vectorization apparatus of the present invention learns the image vectorization strategy in advance, and is therefore suitable for use in vectorizing images containing noise or images with blurred edges.

Abstract

An image vectorization learning unit (3) converts a sample raster image in a raster image database (2) into a vector image in accordance with a given method, and stores the vector image paired with an image feature value in a pattern database for raster-to-vector conversion (4). An image vectorization execution unit (5) divides an input raster image into partial regions, selects from the pattern database for raster-to-vector conversion (4) the image vectors stored paired with image feature values matching the image feature values of the respective regions, and combines the image vectors.

Description

Image vectorization apparatus, image vectorization method, and image vectorization program
 The present invention relates to an image vectorization apparatus, an image vectorization method, and an image vectorization program for converting a raster image into a vector image format.
 In the following, a "vector image" of the present invention refers to an image represented by points, lines, and surfaces, with its color information expressed by parametric equations. Here, a "point" is a two-dimensional coordinate value, a three-dimensional coordinate value, or, more generally, an N-dimensional coordinate value. A "line" is a straight line or curve connecting two points. A "surface" is a region bounded by a plurality of lines. A "raster image" is an image represented by a set of colored points. "Image vectorization" refers to converting a raster image into a vector image.
 Image vectorization methods have conventionally been proposed to make image editing easier and to keep shapes in an image from blurring or collapsing when the image is enlarged or reduced (see, for example, Non-Patent Document 1). FIG. 13 shows an example of image vectorization by the method of Non-Patent Document 1. In the graphs of FIGS. 13A and 13B, the horizontal axis represents image coordinates and the vertical axis represents pixel values (color information); for simplicity of explanation, the two-dimensional image coordinates are shown as one-dimensional image coordinates. As shown in FIG. 13A, when the pixel values of the raster image contain no noise, image vectorization is realized by approximating the pixel values with a smooth bicubic surface L. The portion enclosed by the broken line in FIG. 13A is where the color changes greatly and corresponds, for example, to an edge (contour) of an object.
 However, the image vectorization method of Non-Patent Document 1 performs vectorization so as to faithfully reproduce the input raster image, and therefore also faithfully reproduces any noise present in the raster image. For example, as shown in FIG. 13B, when the raster image contains noise, the method tries to reproduce even the noise pixel value Z, producing an approximating surface L′ that is affected by the noise.
 Furthermore, the method of Non-Patent Document 1 presupposes that object edges are clearly imaged. It therefore provides no processing for emphasizing a blurred object edge when vectorizing a raster image in which the edge is unclear.
 The present invention has been made to solve the above problems, and its object is to provide an image vectorization apparatus, an image vectorization method, and an image vectorization program that automatically vectorize a raster image containing noise without being affected by the noise.
 An image vectorization apparatus according to the present invention includes: an image vectorization learning unit that converts a sample raster image into a vector image according to a predetermined method; an image vectorization pattern database that holds pairs of the sample raster image and its vector image; and an image vectorization execution unit that converts an input raster image into a vector image using the pairs held in the image vectorization pattern database.
 An image vectorization method according to the present invention includes: an image vectorization learning step of converting a sample raster image into a vector image according to a predetermined method; and an image vectorization execution step of converting an input raster image into a vector image using pairs of the sample raster image and its vector image.
 An image vectorization program according to the present invention causes a computer to execute: an image vectorization learning procedure of converting a sample raster image into a vector image according to a predetermined method; and an image vectorization execution procedure of converting an input raster image into a vector image using pairs of the sample raster image and its vector image.
 According to the present invention, by learning an image vectorization strategy in advance from sample raster images, a raster image containing noise can be vectorized automatically without being affected by the noise.
FIG. 1 is a block diagram showing the configuration of an image vectorization apparatus according to Embodiment 1 of the present invention.
FIG. 2 is a flowchart showing the operation of the image vectorization learning unit of the image vectorization apparatus according to Embodiment 1.
FIG. 3 shows the conversion process of the image vectorization learning unit; FIG. 3A is a raster image before conversion and FIG. 3B is the vector image after conversion.
FIG. 4 is a diagram showing an example of a local coordinate system defined on an image.
FIG. 5 is a diagram showing an example in which the image vectorization learning unit vectorizes a raster image to which noise has been added.
FIG. 6 is a block diagram showing the configuration of the image vectorization pattern database.
FIG. 7 is a flowchart showing the operation of the image vectorization execution unit of the image vectorization apparatus according to Embodiment 1.
FIG. 8 is a diagram showing an example of image feature amount calculation by the image vectorization execution unit.
FIG. 9 is a diagram showing candidate vector images selected by the image vectorization execution unit and their candidate probabilities.
FIG. 10 is a diagram showing how the image vectorization execution unit selects a vector image.
FIG. 11 is a diagram showing an example of how the image vectorization execution unit calculates the consistency of the boundary between a partial region of interest and its peripheral partial regions.
FIG. 12 is a diagram showing an example of deformation of a candidate vector image by the image vectorization execution unit.
FIG. 13 illustrates the image vectorization method of Non-Patent Document 1; FIG. 13A shows a raster image without noise and FIG. 13B shows a raster image containing noise.
 Embodiments of the present invention will now be described in more detail with reference to the accompanying drawings.
Embodiment 1.
 The image vectorization apparatus 1 shown in FIG. 1 comprises: a raster image database 2 that stores a plurality of raster images serving as samples for learning an image vectorization strategy; an image vectorization learning unit 3 that converts the sample raster images into vector images; an image vectorization pattern database 4 that stores pairs of a sample raster image and its vector image; and an image vectorization execution unit 5 that converts an input raster image into a vector image using the image vectorization pattern database 4.
 In the example of FIG. 1, each component of the image vectorization apparatus 1 (the raster image database 2, the image vectorization learning unit 3, the image vectorization pattern database 4, and the image vectorization execution unit 5) is assumed to be implemented as dedicated hardware, for example a semiconductor integrated circuit board on which an MPU (Micro Processing Unit) is mounted. Alternatively, the image vectorization apparatus 1 may be implemented on a computer; in that case, an image vectorization program describing the processing of the raster image database 2, the image vectorization learning unit 3, the image vectorization pattern database 4, and the image vectorization execution unit 5 is stored in the memory of the computer, and the CPU (Central Processing Unit) of the computer executes the image vectorization program stored in that memory.
 The operation of the image vectorization apparatus 1 will now be described. First, the first half of the processing is explained with reference to the flowchart of FIG. 2.
 In step ST1, the image vectorization learning unit 3 acquires the plurality of raster images stored in the raster image database 2 and converts each acquired raster image into a vector image.
 FIG. 3 illustrates the conversion process of the image vectorization learning unit 3: FIG. 3A shows a raster image I before conversion, and FIG. 3B shows the vector image V after conversion. The image vectorization learning unit 3 acquires a raster image I of N × M pixels from the raster image database 2 and, following the operator's instructions and using existing image processing functions or the like, converts it into a vector image V, thereby learning the image vectorization strategy in advance.
 For example, the raster image I in FIG. 3 is blurred, so its edge position is ambiguous. Therefore, at the operator's discretion, the raster image I is converted into a vector image V composed of a surface A1 bounded by points P1, P2, P3, and P4 and a surface A2 bounded by points P2, P5, P3, and P6. Through this conversion, the information that the edge of the raster image I is the curve formed by points P2 and P3 is retained in the vector image V, so the level of edge determination can be learned in advance.
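The following minimal Python sketch (not part of the patent) illustrates one way the point/surface representation described above could be held in memory; the class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Point = Tuple[float, float]  # a two-dimensional coordinate value (x, y)

@dataclass
class Face:
    """A surface: a region bounded by lines connecting its corner points."""
    corners: List[Point]                      # e.g. [P1, P2, P3, P4]
    color: Callable[[float, float], float]    # parametric color function F(u, v)

@dataclass
class VectorImage:
    """A vector image: a collection of surfaces with parametric color."""
    faces: List[Face] = field(default_factory=list)

# The blurred raster image I of FIG. 3 could then be represented by two faces:
# A1 bounded by P1, P2, P3, P4 and A2 bounded by P2, P5, P3, P6.
```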
 An example of how the color and luminance information of the surface A1 in FIG. 3 can be expressed will now be described. FIG. 4 shows an example of a local coordinate system defined on the image: the local coordinate system (U, V) of the surface A1 with the point P1 as the origin P0(0, 0). In the figure, u and v are real parameters taking values between 0 and 1; the coordinates of point P2 are (u = 1, v = 0), those of point P3 are (u = 1, v = 1), and those of point P4 are (u = 0, v = 1). Information such as the color and luminance at an arbitrary point P(u, v) on the surface A1 can then be expressed by an arbitrary parametric function F(u, v).
 As the function F(u, v), for example, the "Ferguson patch" described in Non-Patent Document 1 may be used, or F(u, v) may be expressed by the sigmoid function of the following expression (1), which can represent a steep edge:
  F(u, v) = 1 / {1 + exp(a × u + b × v + c)}   (1)
 Here, a, b, and c are constants that the user may specify arbitrarily.
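As a hedged illustration of expression (1), the sketch below evaluates the sigmoid color function over the local (u, v) coordinates of a face; the constants a, b, and c are arbitrary example values, since the patent leaves them to the user.

```python
import numpy as np

def sigmoid_surface(u, v, a=-10.0, b=0.0, c=5.0):
    """Color/luminance F(u, v) = 1 / (1 + exp(a*u + b*v + c)), as in expression (1)."""
    return 1.0 / (1.0 + np.exp(a * u + b * v + c))

# Evaluate the face on a grid of its local coordinates (u, v in [0, 1]).
u, v = np.meshgrid(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
patch = sigmoid_surface(u, v)
print(patch.min(), patch.max())  # close to 0 on one side of the face, close to 1 on the other
```

With a large |a|, the transition along u is steep, which is how a sharp edge inside a single face would be represented.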
 The image vectorization learning unit 3 may also generate a vector image after removing noise from the input raster image, for example based on the operator's judgment. In this way the noise level can be learned in advance.
 FIG. 5 shows an example in which a raster image to which noise has been added is vectorized. In FIG. 5, the horizontal axis represents image coordinates and the vertical axis represents pixel values (luminance information); for simplicity, the two-dimensional image coordinates are shown as one-dimensional image coordinates. When the image contains a luminance value Z to which noise has been added, the operator or the like judges this luminance value Z to be noise and instructs that it be ignored, and the image vectorization learning unit 3 computes an approximating surface L that smoothly connects the luminance values other than the value Z judged to be noise.
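A minimal one-dimensional sketch of this noise-aware learning step, assuming the operator has already flagged the noisy sample; a polynomial fit stands in for the approximating surface L, which is a simplification of the patent's surface model.

```python
import numpy as np

x = np.arange(10, dtype=float)                 # 1-D image coordinates
y = 0.5 * x + 1.0                              # smooth underlying luminance
y[4] += 8.0                                    # noise spike Z at x = 4
noise_idx = {4}                                # indices the operator flags as noise

keep = np.array([i not in noise_idx for i in range(len(x))])
coeffs = np.polyfit(x[keep], y[keep], deg=3)   # smooth approximation L, fitted without Z
L = np.polyval(coeffs, x)
print(np.round(L, 2))                          # the spike at x = 4 is not reproduced
```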
 In the subsequent step ST2, the image vectorization learning unit 3 calculates an image feature amount from the sample raster image acquired from the raster image database 2. The image feature amount may be arbitrary, for example luminance values contained in the raster image, a color histogram, an edge histogram, or a color-edge correlation function. Local feature amounts of the raster image such as HOG (Histogram of Oriented Gradients) or SIFT (Scale Invariant Feature Transform) may also be calculated.
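As one concrete example of such a feature amount, the sketch below computes a normalized luminance histogram of a raster patch; it is only an illustration, and HOG or SIFT descriptors from an image processing library could be substituted.

```python
import numpy as np

def luminance_histogram(patch, bins=16):
    """Normalized luminance histogram of a raster patch (values in [0, 255])."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist.astype(float) / max(hist.sum(), 1)

patch = (np.arange(64).reshape(8, 8) * 4) % 256   # toy 8x8 raster patch
F = luminance_histogram(patch)
print(F.shape, F.sum())                           # (16,) 1.0
```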
 In the subsequent step ST3, the image vectorization learning unit 3 records in the image vectorization pattern database 4 the pair consisting of the image feature amount extracted from the sample raster image and the vector image obtained by vectorizing that raster image.
 FIG. 6 is a block diagram showing the configuration of the image vectorization pattern database 4. The image vectorization learning unit 3, for example, acquires a raster image Ij (j = 1, ..., N) from the raster image database 2, calculates an image feature amount Fj for this raster image Ij, generates a vector image Vj, and stores the pair of the image feature amount Fj and the vector image Vj in the image vectorization pattern database 4 as an image vectorization pattern.
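A minimal in-memory stand-in for the image vectorization pattern database 4 might look as follows; the class name, the Euclidean distance, and the threshold-based lookup are illustrative assumptions, not the patent's specification.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple
import numpy as np

@dataclass
class VectorizationPatternDB:
    """Holds pairs (Fj, Vj) of sample image feature amounts and their vector images."""
    patterns: List[Tuple[np.ndarray, Any]] = field(default_factory=list)

    def add(self, feature: np.ndarray, vector_image: Any) -> None:
        self.patterns.append((feature, vector_image))

    def candidates(self, feature: np.ndarray, threshold: float):
        """Return (index, distance) for stored patterns closer than `threshold`."""
        out = []
        for j, (Fj, _) in enumerate(self.patterns):
            d = float(np.linalg.norm(feature - Fj))
            if d < threshold:
                out.append((j, d))
        return out
```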
 Through the above steps ST1 to ST3, the image vectorization learning unit 3 vectorizes the sample raster images of the raster image database 2 and stores the results in the image vectorization pattern database 4, thereby learning information on the image vectorization strategy, including the judgment of noise and edges.
 Next, the second half of the processing of the image vectorization apparatus 1 is explained with reference to the flowchart of FIG. 7.
 First, in step ST11, the image vectorization execution unit 5 acquires a raster image as input. The acquired raster image may have an arbitrary size.
 In the subsequent step ST12, the image vectorization execution unit 5 divides the acquired raster image into partial regions. FIG. 8A shows an example of a raster image I acquired by the image vectorization execution unit 5, and FIG. 8B shows an example of its division into partial regions. In the example of FIG. 8, the image vectorization execution unit 5 divides the input raster image I equally into small rectangular regions. The size and shape of the partial regions are arbitrary; any size from which an image feature amount can be obtained is sufficient.
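A simple sketch of the equal division into rectangular partial regions described in step ST12; the tile sizes are arbitrary example values.

```python
import numpy as np

def split_into_regions(image, tile_h, tile_w):
    """Divide a raster image into equal rectangular partial regions Ri."""
    H, W = image.shape[:2]
    regions = []
    for top in range(0, H - tile_h + 1, tile_h):
        for left in range(0, W - tile_w + 1, tile_w):
            regions.append(((top, left), image[top:top + tile_h, left:left + tile_w]))
    return regions

I = np.zeros((64, 96), dtype=np.uint8)       # toy input raster image
regions = split_into_regions(I, 16, 16)
print(len(regions))                          # 4 x 6 = 24 partial regions
```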
 In the subsequent step ST13, the image vectorization execution unit 5 calculates an image feature amount from each partial region of the raster image. For example, as shown in FIG. 8C, it calculates an image feature amount Ki from the i-th partial region Ri. As the image feature amount Ki, the same kind of information is calculated as for the sample image feature amounts Fj in step ST2 described above. In step ST13, the image vectorization execution unit 5 repeats the same processing for all partial regions of the raster image I.
 In the subsequent step ST14, the image vectorization execution unit 5 searches the image vectorization pattern database 4 using the image feature amount of each partial region, extracts a plurality of sample vector images that fit each partial region as candidate vector images, and calculates a candidate probability for each candidate vector image.
 The processing of step ST14 is explained with reference to FIGS. 8 and 9. Using the image feature amount Ki calculated from the i-th partial region Ri shown in FIG. 8, the image vectorization execution unit 5 computes the distance d(i, j) between Ki and each of the N image feature amounts Fj (j = 1, ..., N) stored in the image vectorization pattern database 4, acquires from the database each vector image Vj paired with an image feature amount Fj for which the distance d(i, j) is smaller than a fixed value, and makes it a candidate vector image of the partial region Ri. The image vectorization execution unit 5 then calculates the candidate probability p(i, j) of each candidate vector image from the distance d(i, j).
 As methods for computing the distance d(i, j) between the image feature amounts Ki and Fj, when Ki and Fj are given as one-dimensional arrays, the Euclidean distance between Ki and Fj, the inner product of Ki and Fj, or the normalized cross-correlation of Ki and Fj may be used, for example. Alternatively, when the image feature amounts Ki and Fj are color histograms, the distance d(i, j) may be computed as the Bhattacharyya distance.
 As a method of calculating the candidate probability p(i, j), for example, it can be computed from the distance d(i, j) as in the following expression (2):
  p(i, j) = a′ × exp{-b′ × d(i, j) × d(i, j)}   (2)
 Here, a′ is a normalization constant and b′ is a positive constant determined by the user. The larger the candidate probability p(i, j), the better the image vector Vj suits the partial region Ri. For the partial region Ri shown in FIG. 8C, the image vectorization execution unit 5 obtains the three candidate vector images V1, V2, and V3 shown in FIG. 9, together with their candidate probabilities.
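The sketch below combines the distance computation and expression (2); the Euclidean distance, the threshold d_max, and the constants a′ and b′ are example choices (in particular, a′ is simply set to 1 rather than computed as a true normalization constant).

```python
import numpy as np

def candidate_probabilities(Ki, features, d_max=0.5, a_prime=1.0, b_prime=4.0):
    """Candidate probability p(i, j) = a' * exp(-b' * d(i, j)^2), as in expression (2).

    `features` is the list of stored sample feature amounts Fj; only patterns
    with d(i, j) below d_max are kept as candidates."""
    candidates = {}
    for j, Fj in enumerate(features):
        d = float(np.linalg.norm(Ki - Fj))        # Euclidean distance d(i, j)
        if d < d_max:
            candidates[j] = a_prime * np.exp(-b_prime * d * d)
    return candidates

Ki = np.array([0.2, 0.8])
Fjs = [np.array([0.25, 0.75]), np.array([0.9, 0.1]), np.array([0.1, 0.9])]
print(candidate_probabilities(Ki, Fjs))           # patterns 0 and 2 survive the threshold
```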
 In the subsequent step ST15, the image vectorization execution unit 5 uses the candidate vector images and their candidate probabilities to determine the vector image of each partial region divided in step ST12, deforming the candidate vector images obtained in step ST14 where necessary while maintaining consistency with the surrounding partial regions.
 The processing of step ST15 is explained with reference to FIGS. 10 to 12. First, in each partial region, the image vectorization execution unit 5 selects the candidate vector image with the largest candidate probability and assigns it as the initial value of the image vector of that partial region. In the example of FIG. 10A, vector images are assumed to have already been obtained for the partial regions Sk (k = 1, 2) surrounding the partial region Ri of interest, and the three candidate vector images Vj (j = 1, 2, 3) shown in FIG. 9 are assumed to have been obtained for the partial region Ri of interest.
 In this case, the image vectorization execution unit 5 selects from among the candidate vector images Vj the vector image best suited to the partial region Ri of interest while maintaining consistency with the surrounding partial regions Sk (k = 1, 2), for example as follows.
 First, an example of how the consistency between a candidate vector image Vj and the surrounding partial regions Sk is calculated is explained with reference to FIG. 11. As shown in FIG. 11, let Y1 be the upper boundary region of the candidate vector image Vj of the partial region Ri of interest, Y2 its left boundary region, W1 the lower boundary region of the surrounding partial region S1, and W2 the right boundary region of the surrounding partial region S2. The image vectorization execution unit 5 then computes the boundary error f(j) between the candidate vector image Vj and the surrounding partial regions Sk as in the following expression (3):
  f(j) = (pixel value of W1 - pixel value of Y1) × (pixel value of W1 - pixel value of Y1)
       + (pixel value of W2 - pixel value of Y2) × (pixel value of W2 - pixel value of Y2)   (3)
 Here, the pixel value of W1 means the pixel value generated in the boundary region W1 of the vector image corresponding to the position of the surrounding partial region S1 (that is, the image feature amount calculated as in step ST2); the same applies to W2, Y1, and Y2.
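A hedged sketch of expression (3), summing the squared differences over all boundary pixels rather than a single pixel value per boundary region, which is a slight generalization for illustration:

```python
import numpy as np

def boundary_error(candidate, upper_neighbor, left_neighbor):
    """Boundary error f(j) of expression (3) as a sum of squared differences.

    Y1 = top row of the candidate tile, W1 = bottom row of the tile above it;
    Y2 = left column of the candidate, W2 = right column of the tile to its left."""
    Y1, W1 = candidate[0, :], upper_neighbor[-1, :]
    Y2, W2 = candidate[:, 0], left_neighbor[:, -1]
    return float(np.sum((W1 - Y1) ** 2) + np.sum((W2 - Y2) ** 2))

tile = np.ones((4, 4))
above = np.ones((4, 4))
left = np.full((4, 4), 2.0)
print(boundary_error(tile, above, left))   # 0 + 4 * (2 - 1)^2 = 4.0
```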
 In expression (3), a small boundary error f(j) means that the difference in pixel values between the candidate vector image Vj and the candidate vector image set as the initial value of the surrounding partial region Sk is small; the continuity between the candidate vector image Vj of the partial region Ri of interest and the candidate vector images of the surrounding partial regions Sk is then high, and they can be said to be consistent. Conversely, a large boundary error f(j) means that the candidate vector image Vj of the partial region Ri of interest and the candidate vector images of the surrounding partial regions Sk are likely to be discontinuous and therefore inconsistent.
 Subsequently, after obtaining the boundary error f(j) for every candidate vector image Vj of the partial region Ri of interest, the image vectorization execution unit 5 finds the variable j (the index of a candidate vector image) that maximizes the cost function E(j|i) defined by the following expression (4), and adopts the candidate vector image Vj for that j as the vector image of the partial region Ri of interest:
  E(j|i) = p(i, j) × q(j)   (4)
  q(j) = a″ × exp{-b″ × f(j)}   (5)
 Here, p(i, j) is the candidate probability of each candidate vector image obtained by the image vectorization execution unit 5 in step ST14 described above, and q(j) is a probability expressing the continuity of the boundary, defined as in expression (5). a″ is a normalization constant and b″ is a positive constant set by the user. The cost function E(j|i) is a function of the variable j; in this example, j is one of 1, 2, and 3 shown in FIG. 9.
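Expressions (4) and (5) can be combined into a small selection routine; the constants a″ and b″ and the toy numbers are examples only.

```python
import numpy as np

def select_vector_image(p_i, f, a2=1.0, b2=1.0):
    """Pick the candidate j maximizing E(j|i) = p(i, j) * q(j), expressions (4)-(5).

    p_i maps candidate index j -> candidate probability p(i, j);
    f maps candidate index j -> boundary error f(j) with the neighbors."""
    best_j, best_E = None, -np.inf
    for j, p in p_i.items():
        q = a2 * np.exp(-b2 * f[j])        # boundary-continuity probability q(j)
        E = p * q
        if E > best_E:
            best_j, best_E = j, E
    return best_j, best_E

p_i = {0: 0.9, 1: 0.7, 2: 0.4}             # candidate probabilities from step ST14
f = {0: 5.0, 1: 0.2, 2: 0.1}               # boundary errors from expression (3)
print(select_vector_image(p_i, f))          # candidate 1 wins despite its lower p(i, j)
```

In the toy numbers, candidate 0 has the highest candidate probability, but its large boundary error makes q(j) small, so candidate 1 is selected instead.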
 Maximizing the cost function E(j|i) to select the vector image of the partial region Ri of interest has the following advantage. If one simply wanted the candidate vector image most suitable for the partial region Ri of interest by itself, it would suffice to choose the candidate image vector Vj that maximizes the candidate probability p(i, j). With that method, however, the boundary with the surrounding partial regions Sk may become discontinuous; the probability q(j) expressing consistency with the surrounding partial regions Sk then takes a small value, and as a result the cost function E(j|i) is small.
 In other words, by maximizing the cost function E(j|i), the vector image corresponding to the partial region Ri of interest can be determined while taking into account both the suitability of the candidate vector image Vj in the partial region Ri of interest (expressed by the candidate probability p(i, j)) and the consistency with the surrounding partial regions Sk (expressed by the probability q(j)). The raster image can therefore be vectorized so that the boundary regions of the partial regions are continuous.
 After selecting the candidate vector image that maximizes the cost function E(j|i), the image vectorization execution unit 5 may further correct that vector image to make the consistency with the surrounding partial regions Sk more accurate. FIG. 12 shows an example of such a correction. As shown in FIG. 12A, when there is a slight difference in the boundary region between the line L1 of the partial region Ri of interest and the line L2 of the surrounding partial region Sk, the image vectorization execution unit 5 may deform the line L1 of the partial region Ri of interest as shown in FIG. 12B so as to make it consistent with the surrounding partial region Sk.
 In step ST15, the image vectorization execution unit 5 repeats the same processing for all partial regions Ri of the raster image I and determines an appropriate vector image for each partial region Ri. In order to maximize the cost function E(j|i) over all partial regions Ri in turn, the image vectorization execution unit 5 may, for example, use an iterative optimization method such as belief propagation so that the cost function E(j|i) is maximized uniformly over all the partial regions Ri, and thereby obtain the vector image of each partial region Ri.
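The sketch below is a simplified, ICM-style iterative re-selection loop, not an implementation of belief propagation itself; the function names and the data layout (dictionaries keyed by region and candidate indices) are assumptions for illustration.

```python
import numpy as np

def iterate_assignments(regions, candidates, prob, boundary_err, sweeps=5):
    """Greedy sweeps: repeatedly re-select, for each partial region, the candidate
    maximizing E(j|i) given the currently assigned candidates of its neighbors.

    `boundary_err(i, j, assign)` must return f(j) for region i against the
    currently assigned neighbors; this is a simplification of the iterative
    optimization (e.g. belief propagation) mentioned in the text."""
    # start from the candidate with the highest p(i, j) in every region
    assign = {i: max(candidates[i], key=lambda j: prob[i][j]) for i in regions}
    for _ in range(sweeps):
        changed = False
        for i in regions:
            scores = {j: prob[i][j] * np.exp(-boundary_err(i, j, assign))
                      for j in candidates[i]}
            best = max(scores, key=scores.get)
            if best != assign[i]:
                assign[i], changed = best, True
        if not changed:
            break
    return assign
```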
 In the subsequent step ST16, the image vectorization execution unit 5 combines the vector images selected for the partial regions Ri of the raster image I acquired in step ST11 and outputs a vector image V.
 As described above, according to Embodiment 1, the image vectorization apparatus 1 comprises: the raster image database 2 that holds sample raster images; the image vectorization learning unit 3 that converts the sample raster images into vector images according to a predetermined method and calculates their image feature amounts; the image vectorization pattern database 4 that holds the pairs of image feature amounts and vector images; and the image vectorization execution unit 5 that searches the image vectorization pattern database 4 for image feature amounts matching the image feature amounts of an input raster image and selects the vector images paired with the retrieved image feature amounts as the vector images of the input raster image. By creating the image vectorization pattern database 4 in advance, the image vectorization strategy, such as how much noise is to be removed and which parts of the image are to be regarded as edges, can be held as patterns, so that the raster image can be vectorized without being affected by noise.
 The image vectorization apparatus 1 can also perform image vectorization suited to a particular purpose by changing the information held in the image vectorization pattern database 4. For example, if the image vectorization learning unit 3 intentionally performs illustration-style image vectorization in which the number of colors in the image is reduced and stores those patterns in the image vectorization pattern database 4, the image vectorization execution unit 5 can automatically generate illustration-style vector images. Similarly, by storing photorealistic image vectorization patterns resembling actual photographs in the image vectorization pattern database 4, vector images resembling actual photographs can be generated automatically.
 Furthermore, according to Embodiment 1, the image vectorization execution unit 5 calculates a probability indicating the continuity of the boundary between the candidate vector images of a partial region of interest and those of its surrounding partial regions, calculates a cost function by multiplying this probability by the candidate probability of the candidate vector image of the partial region of interest, and selects the candidate vector image that maximizes the cost function as the optimal vector image for the partial region of interest. This makes it possible to take into account both the suitability of the candidate vector image in the partial region of interest and the consistency with the surrounding partial regions, and the continuity of the boundary regions of the partial regions can be improved.
 As described above, the image vectorization apparatus according to the present invention learns image vectorization policies in advance, and is therefore well suited for vectorizing images that contain noise or whose edges are blurred.

Claims (7)

  1.  An image vectorization apparatus comprising:
     an image vectorization learning unit that converts a sample raster image into a vector image according to a predetermined method;
     an image vectorization pattern database that holds a pair of the sample raster image and its vector image; and
     an image vectorization execution unit that converts an input raster image into a vector image using the pairs of sample raster images and their vector images held in the image vectorization pattern database.
  2.  The image vectorization apparatus according to claim 1, wherein the image vectorization learning unit calculates an image feature amount from the sample raster image, and
     the image vectorization pattern database holds a pair of the image feature amount of the raster image and its vector image.
  3.  The image vectorization apparatus according to claim 2, wherein the image vectorization execution unit divides the input raster image into partial regions, calculates an image feature amount for each partial region, compares it with the image feature amounts of the sample raster images held in the image vectorization pattern database, selects the vector image paired with a matching image feature amount as a candidate vector image for the partial region, and calculates a candidate probability indicating the degree of fit of the selected candidate vector image.
  4.  The image vectorization apparatus according to claim 3, wherein the image vectorization execution unit uses the candidate vector images and their candidate probabilities for the partial regions of the input raster image to select an optimal vector image from among the candidate vector images for a partial region of interest while maintaining consistency of the candidate vector images between the partial region of interest and its neighboring partial regions.
  5.  The image vectorization apparatus according to claim 4, wherein, in order to maintain consistency of the candidate vector images between the partial region of interest and its neighboring partial regions, the image vectorization execution unit calculates a probability indicating the continuity of the boundary between the candidate vector images of the partial region of interest and those of its neighboring partial regions, multiplies this probability by the candidate probability of the candidate vector image of the partial region of interest to calculate a cost function, and selects the candidate vector image that maximizes the cost function as the optimal vector image for the partial region of interest.
  6.  An image vectorization method comprising:
     an image vectorization learning step of converting a sample raster image into a vector image according to a predetermined method; and
     an image vectorization execution step of converting an input raster image into a vector image using a pair of the sample raster image and its vector image.
  7.  An image vectorization program for causing a computer to execute:
     an image vectorization learning procedure of converting a sample raster image into a vector image according to a predetermined method; and
     an image vectorization execution procedure of converting an input raster image into a vector image using a pair of the sample raster image and its vector image.
PCT/JP2010/002671 2010-04-13 2010-04-13 Image vectorization device, image vectorization method, and image vectorization program WO2011128933A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/002671 WO2011128933A1 (en) 2010-04-13 2010-04-13 Image vectorization device, image vectorization method, and image vectorization program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/002671 WO2011128933A1 (en) 2010-04-13 2010-04-13 Image vectorization device, image vectorization method, and image vectorization program

Publications (1)

Publication Number Publication Date
WO2011128933A1 true WO2011128933A1 (en) 2011-10-20

Family

ID=44798329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/002671 WO2011128933A1 (en) 2010-04-13 2010-04-13 Image vectorization device, image vectorization method, and image vectorization program

Country Status (1)

Country Link
WO (1) WO2011128933A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04260980A (en) * 1990-12-26 1992-09-16 Mitsubishi Electric Corp Device for recognizing graphic
JP2005182369A (en) * 2003-12-18 2005-07-07 Hitachi Ltd Generation method and device for vectorized figure
JP2006323511A (en) * 2005-05-17 2006-11-30 Hitachi Ltd Symbol-identifying method and device thereof
JP2008233840A (en) * 2007-02-21 2008-10-02 Fuji Xerox Co Ltd Image processor and image processing program

Similar Documents

Publication Publication Date Title
CN107330439B (en) Method for determining posture of object in image, client and server
WO2021036059A1 (en) Image conversion model training method, heterogeneous face recognition method, device and apparatus
US20200356818A1 (en) Logo detection
US8447114B2 (en) Method and apparatus for calculating pixel features of image data
JP5505409B2 (en) Feature point generation system, feature point generation method, and feature point generation program
CN112651438A (en) Multi-class image classification method and device, terminal equipment and storage medium
JP2018022360A (en) Image analysis device, image analysis method and program
US8582880B2 (en) Method and apparatus for calculating features of image data
JP5766620B2 (en) Object region detection apparatus, method, and program
JP6597914B2 (en) Image processing apparatus, image processing method, and program
JP5500400B1 (en) Image processing apparatus, image processing method, and image processing program
Ozbay et al. A hybrid method for skeleton extraction on Kinect sensor data: Combination of L1-Median and Laplacian shrinking algorithms
JP2016509805A (en) High frame rate of image stream
CN115953533A (en) Three-dimensional human body reconstruction method and device
JP6202938B2 (en) Image recognition apparatus and image recognition method
CN111414823B (en) Human body characteristic point detection method and device, electronic equipment and storage medium
US9008434B2 (en) Feature extraction device
CN115965788B (en) Point cloud semantic segmentation method based on multi-view image structural feature attention convolution
WO2011128933A1 (en) Image vectorization device, image vectorization method, and image vectorization program
US11288534B2 (en) Apparatus and method for image processing for machine learning
CN113627446A (en) Image matching method and system of feature point description operator based on gradient vector
CN113724329A (en) Object attitude estimation method, system and medium fusing plane and stereo information
JP2004021373A (en) Method and apparatus for estimating body and optical source information
JP2002140706A (en) Picture identification device and picture data processor
JP6375778B2 (en) Image processing method and image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10849775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10849775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP