WO2012114386A1 - Image vectorization device, image vectorization method, and image vectorization program - Google Patents

Image vectorization device, image vectorization method, and image vectorization program

Info

Publication number
WO2012114386A1
WO2012114386A1 (PCT/JP2011/001106)
Authority
WO
WIPO (PCT)
Prior art keywords
area
edge
partial
image
importance
Prior art date
Application number
PCT/JP2011/001106
Other languages
French (fr)
Japanese (ja)
Inventor
Shinya Taguchi (進也 田口)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2011/001106
Publication of WO2012114386A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41 Bandwidth or redundancy reduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Definitions

  • the present invention relates to, for example, an image vectorization apparatus, an image vectorization method, and an image vectorization program for converting a raster image into a vector image.
  • Non-Patent Document 1 proposes an image vectorization method intended to facilitate image editing and to prevent blurring and shape collapse even when the image is enlarged or reduced. That is, Non-Patent Document 1 discloses an image vectorization device that realizes image vectorization by dividing a raster image into mesh-like partial areas and approximating the pixel values in each partial area with a bicubic surface.
  • A raster image is an image represented by a set of colored points.
  • A vector image is an image in which points, lines, and surfaces are represented by equations, with the color information represented by parametric equations.
  • Image vectorization refers to converting a raster image into a vector image.
  • A point is a two-dimensional, three-dimensional, or, more generally, N-dimensional coordinate value.
  • A line is a straight line or curve connecting two points.
  • A surface is an area surrounded by a plurality of lines.
  • The present invention has been made to solve the above-described problems, and its object is to obtain an image vectorization apparatus, an image vectorization method, and an image vectorization program that can dynamically change the amount of image data used in accordance with changes in scale or viewpoint and that allow random access to the image data.
  • An image vectorization apparatus according to the present invention includes: edge detection means for detecting the edges existing in a raster image and determining the importance of each edge in the raster image; area dividing means for dividing the raster image into a plurality of partial areas using the edge of highest importance determined by the edge detection means, and for hierarchizing the divided partial areas by repeatedly dividing them using the edges of successively lower importance; and color approximating means for approximating, for each partial area hierarchized by the area dividing means, the pixel values indicating the colors of the pixels constituting that partial area with a continuous function.
  • According to the present invention, the edge detection means detects the edges existing in the raster image and determines their importance; the area dividing means divides the raster image into a plurality of partial areas using the edge of highest importance, and hierarchizes the divided partial areas by repeatedly dividing them using the edges of lower importance; and the color approximating means approximates, for each hierarchized partial area, the pixel values indicating the colors of its constituent pixels with a continuous function. As a result, the amount of image data used can be changed dynamically according to the scale, a switch of viewpoint, and the like, and the image data can be accessed at random.
  • FIG. 1 is a block diagram showing an image vectorization apparatus according to Embodiment 1 of the present invention.
  • The edge detection unit 1 is composed of, for example, a semiconductor integrated circuit on which a CPU, MPU, or GPU (Graphics Processing Unit) is mounted, or a one-chip microcomputer. It detects the edges existing in a raster image, determines the importance of each edge in the raster image, and outputs edge information indicating the edges and importance information indicating their importance to the region dividing unit 2.
  • The edge detection unit 1 constitutes the edge detection means.
  • the area dividing unit 2 is composed of, for example, a semiconductor integrated circuit on which a CPU, MPU, or GPU is mounted, or a one-chip microcomputer.
  • The area dividing unit 2 divides the raster image into a plurality of partial areas using the edge of highest importance indicated by the importance information output from the edge detection unit 1, and, by repeating the process of dividing those partial areas into further partial areas using the edges of successively lower importance, hierarchizes the divided partial areas.
  • the area dividing unit 2 constitutes an area dividing means.
  • the color approximating unit 3 is composed of, for example, a semiconductor integrated circuit on which a CPU, MPU, or GPU is mounted, or a one-chip microcomputer. For each partial region hierarchized by the region dividing unit 2, the partial region is A process of approximating the pixel value indicating the color of the constituent pixels with a continuous function is performed.
  • the color approximating unit 3 constitutes color approximating means.
  • the vector image data storage unit 4 is composed of a storage device such as a RAM or a hard disk, for example.
  • It stores the vector image data generated by the image vectorization device, for example, region information indicating the partial areas hierarchized by the region dividing unit 2 and an area access table indicating the correspondence between the image coordinates of the raster image and the hierarchized partial areas.
  • FIG. 2 is a flowchart showing the processing contents (image vectorization method) of the image vectorization apparatus according to Embodiment 1 of the present invention.
  • FIG. 3 is an explanatory diagram showing edge detection processing and importance determination processing by the edge detection unit 1.
  • the edge detection processing and importance determination processing by the edge detection unit 1 will be described in detail.
  • When a raster image is input, the edge detection unit 1 first reduces it to a plurality of scales, as shown in FIG. 3A, to generate a plurality of reduced images; in the example of FIG. 3A, reduced images of 1/2 and 1/4 the size of the raster image are generated. The edge detection unit 1 then detects the edges existing in the images at all scales (including the original raster image).
  • That is, the edges existing in the full-size raster image, in the 1/2-size reduced image, and in the 1/4-size reduced image are detected.
  • An existing method such as the Canny detector can be used for the edge detection.
  • Having detected the edges, the edge detection unit 1 determines their importance. For example, an edge that roughly captures the outline of an object appearing in the image is judged to be highly important, while an edge that captures fine details of the object outline is judged to be of low importance.
  • Specifically, the edges present in the 1/4-size reduced image, the smallest-scale image, are determined to be the most important,
  • and importance 3 (the highest importance) is assigned to them.
  • Among the edges present in the 1/2-size reduced image, those not present in the 1/4-size reduced image (edges not assigned importance 3) are determined to be of intermediate importance, and importance 2 (medium importance) is assigned to them.
  • Among the edges present in the full-size raster image, those not present in the 1/2- or 1/4-size reduced images are determined to be the least important, and importance 1 (the lowest importance) is assigned to them.
  • In short, the highest importance 3 is given to the edges detected in all the scale images, and the lowest importance 1 is given to the edges detected only in the original-size raster image.
  • When the edge detection unit 1 has performed the edge detection and importance determination, it outputs edge information indicating the edges existing in each scale image and importance information indicating the importance assigned to each edge to the area dividing unit 2.
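The multi-scale importance assignment described above can be sketched as follows. This is an assumption-laden illustration, not the patent's exact algorithm: a plain gradient-magnitude threshold stands in for the Canny detector, and `detect_edges`, `downscale`, and `edge_importance` are hypothetical helper names.

```python
import numpy as np

def detect_edges(img, threshold=0.25):
    """Binary edge map from gradient magnitude (a stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

def downscale(img, factor):
    """Naive box-filter reduction by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def edge_importance(img):
    """Per-pixel importance: 3 for edges surviving down to 1/4 scale,
    2 for edges at 1/2 scale, 1 for edges only at full size."""
    scales = [1, 2, 4]                       # full, 1/2, and 1/4 size, as in FIG. 3
    importance = np.zeros(img.shape, dtype=int)
    for rank, s in enumerate(scales, start=1):
        small = img if s == 1 else downscale(img, s)
        edges = detect_edges(small)
        # Upsample the coarse edge map back to full resolution.
        full = np.repeat(np.repeat(edges, s, axis=0), s, axis=1)
        full = full[:img.shape[0], :img.shape[1]]
        importance[full] = rank              # coarser scales overwrite with a higher rank
    return importance
```

A sharp step edge survives every reduction, so it ends up with importance 3, while texture-level detail that vanishes after downscaling keeps importance 1.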
  • Upon receiving the edge information and importance information from the edge detection unit 1, the area dividing unit 2 refers to them, divides the raster image into a plurality of partial areas (step ST3), and then repeatedly divides those partial areas into further partial areas (steps ST4 and ST5). That is, the region dividing unit 2 refers to the importance information output from the edge detection unit 1, identifies the edges of importance 3 among the edges detected by the edge detection unit 1, and divides the raster image into a plurality of partial regions using those edges (step ST3).
  • Next, the edges of importance 2, one level below importance 3, are identified, and the partial areas obtained above (divided using the importance-3 edges) are further divided using them (steps ST4 and ST5).
  • Then the edges of importance 1, one level below importance 2, are identified, and the partial areas (divided using the importance-3 and importance-2 edges) are further divided using them (steps ST4 and ST5).
  • Since no edge has an importance lower than 1, the repetition of steps ST4 and ST5 then ends.
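The repetition of steps ST3 to ST5 amounts to one loop over the importance levels, from highest to lowest. A minimal sketch, assuming a caller-supplied `split` routine (hypothetical; the actual division procedure is described with FIG. 5):

```python
# The loop of steps ST3 to ST5, sketched abstractly: divide with the
# most important edges first, then refine with each lower level.
def hierarchize(edges_by_importance, split):
    """edges_by_importance: {importance: edges}, e.g. {3: ..., 2: ..., 1: ...}.
    split(regions, edges) -> finer regions.
    Returns {hierarchy_level: partial regions after that level}."""
    regions = ["R"]                # start from the whole image
    hierarchy = {}
    for level, imp in enumerate(sorted(edges_by_importance, reverse=True), 1):
        regions = split(regions, edges_by_importance[imp])
        hierarchy[level] = regions
    return hierarchy

# Toy split that halves every region, just to show the hierarchy shape.
def toy_split(regions, edges):
    return [r + tag for r in regions for tag in ("a", "b")]
```

With three importance levels, the toy split yields 2, 4, and 8 regions at hierarchies (1), (2), and (3) respectively, mirroring the coarse-to-fine structure of FIG. 4.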
  • FIG. 4 is an explanatory diagram showing the area dividing process and the partial area hierarchizing process by the area dividing unit 2.
  • First, the region dividing unit 2 divides the raster image using the “edges of importance 3” having the highest importance (see FIG. 4A), obtaining the partial regions R1, R2, R3, and R4 (see FIG. 4B).
  • FIG. 5 is an explanatory diagram showing a region dividing method of the region dividing unit 2.
  • First, the area dividing unit 2 obtains the inflection points, branch points (including intersection points), and end points of the edges existing in the image. In the example of FIG. 5, the end point of edge (1), the intersection of edge (1) and edge (2), and the inflection point of edge (2) are obtained.
  • Next, the area dividing unit 2 obtains area dividing lines passing through the horizontal or vertical edges existing in the image, and divides the image by those lines. In the example of FIG. 5, since edge (1) is a horizontal edge, the area dividing line passing through edge (1) is obtained; no vertical edge exists in the image.
  • Next, the area dividing unit 2 obtains horizontal or vertical area dividing lines passing through the end points of the edges, and divides the image by those lines. In the example, the vertical area dividing line passing through the end point of edge (1) is obtained; the horizontal area dividing line through that end point overlaps an area dividing line already obtained.
  • Next, the area dividing unit 2 obtains horizontal or vertical area dividing lines passing through the intersections of the edges, and divides the image by those lines. In the example, the vertical area dividing line passing through the intersection of edge (1) and edge (2) is obtained; the horizontal area dividing line through that intersection overlaps an area dividing line already obtained.
  • Next, the region dividing unit 2 obtains horizontal or vertical region dividing lines passing through the inflection points of the edges, and divides the image by those lines. In the example, both the horizontal and the vertical area dividing lines passing through the inflection point of edge (2) are obtained.
  • The area dividing unit 2 repeats the operations shown in FIGS. 5B to 5E until each rectangular partial area of the divided image contains at most one curved edge. When a new area dividing line is added in these operations, it may or may not cross the existing area dividing lines.
  • Thereafter, the region dividing unit 2 approximates any diagonal line or curve included in a partial region with a quadratic or cubic curve,
  • and sets internal line information, indicating the type of function describing the approximate curve and the parameters of that function, as one item of the area information (attribute information) of the partial region.
  • The types of diagonal lines and curves included in a rectangular partial region are not limited here, but the region division may also be performed with the curve types restricted to a fixed set of patterns.
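The dividing-line construction of FIG. 5 can be approximated as follows, under the simplifying assumption that a horizontal and a vertical dividing line are drawn through every special point (end point, branch point, inflection point). `partial_regions` is a hypothetical helper; real edge geometry and the curved internal lines are ignored here.

```python
# Dividing lines through every special point cut the image into a grid
# of rectangular partial regions (x0, y0, x1, y1).
def partial_regions(width, height, special_points):
    """special_points: (x, y) end/branch/inflection points of edges."""
    xs = sorted({0, width}  | {x for x, _ in special_points})
    ys = sorted({0, height} | {y for _, y in special_points})
    # Each cell between consecutive dividing lines is one partial region.
    return [(x0, y0, x1, y1)
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]
```

For instance, two special points in an 8×6 image yield a 3×3 grid of nine rectangular partial regions, each of which can then be examined for the curved edges it contains.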
  • FIG. 6 is an explanatory diagram showing another area dividing method of the area dividing unit 2.
  • In this method as well, the image is divided into a plurality of partial areas using the inflection points, branch points (including intersections), and end points of the edges existing in the image, but each partial region is expressed using one of region pattern (1), region pattern (2), and region pattern (3), as shown in FIG. 6A.
  • Region pattern (1) is an area containing no dividing line;
  • region pattern (2) is an area containing a dividing line described by a curve or straight line passing through the upper-right and lower-left corners;
  • and region pattern (3) is an area containing a dividing line described by a curve or straight line passing through the upper-left and lower-right corners.
  • For example, when two edges, edge (1) and edge (2), exist in the image, the region dividing unit 2 selects, for each small region serving as a partial region, one of region pattern (1), region pattern (2), and region pattern (3) so as to match the two edges (for example, region pattern (1) is selected for small regions through which neither edge (1) nor edge (2) passes,
  • and region pattern (2) or region pattern (3) is selected for small regions through which edge (2) passes, according to the direction of the curve or straight line).
  • By combining the region patterns selected for the small regions, a region division result as shown in FIG. 6C is obtained.
  • With this method, the types of curves in each partial region can be limited, which has the advantage of reducing the number of curve parameters.
  • When the area dividing unit 2 has divided the raster image into the partial areas R1, R2, R3, and R4 using the “edges of importance 3” having the highest importance (see FIG. 4A), it next divides those partial areas using the “edges of importance 2” of the next highest importance (see FIGS. 4A and 4B). In the example of FIG. 4, since an edge of importance 2 exists in the partial region R4, the partial region R4 is divided into the partial regions R5, R6, and R7.
  • When the partial region R4 has been divided into the partial regions R5, R6, and R7 using the “edges of importance 2” (see FIG. 4A), the region dividing unit 2 then divides the partial areas using the “edges of importance 1” having the lowest importance (see FIGS. 4A and 4B).
  • In the example of FIG. 4, the edge of importance 1 crosses the partial regions R1, R2, and R3,
  • so the partial region R1 is divided into the partial regions R8 and R11,
  • the partial region R2 is divided into the partial regions R9 and R12,
  • and the partial region R3 is divided into the partial regions R10 and R13.
  • the area dividing unit 2 performs a hierarchizing process on the divided partial areas R1 to R13 (step ST6).
  • That is, the partial areas R1, R2, R3, and R4, divided using only the “edges of importance 3” having the highest importance, are defined as belonging to hierarchy (1) (the highest hierarchy),
  • and the hierarchy number indicating hierarchy (1) is assigned to the partial regions R1, R2, R3, and R4.
  • Next, the partial regions R5, R6, and R7, divided by additionally using the “edges of importance 2”, are defined as belonging to hierarchy (2) (the middle hierarchy), and the hierarchy number indicating hierarchy (2) is assigned to the partial regions R5, R6, and R7.
  • Then, the partial areas R8, R9, R10, R11, R12, and R13, divided by additionally using the “edges of importance 1”, are defined as belonging to hierarchy (3) (the lowest hierarchy), and the hierarchy number indicating hierarchy (3) is assigned to them.
  • When the region dividing unit 2 has assigned hierarchy numbers to the partial regions R1 to R13, it links the corresponding partial regions across hierarchies (1) to (3).
  • That is, since the partial regions R5, R6, and R7 are regions divided from the partial region R4, there is a correspondence between them, and, as shown in FIG. 4, the partial region R4 is linked to the partial regions R5, R6, and R7.
  • Likewise, since the partial areas R8 and R11 are regions divided from the partial area R1, the partial region R1 is linked to the partial regions R8 and R11;
  • since the partial regions R9 and R12 are regions divided from the partial region R2, the partial area R2 is linked to the partial regions R9 and R12;
  • and since the partial regions R10 and R13 are regions divided from the partial region R3, the partial area R3 is linked to the partial regions R10 and R13.
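The linking step above can be sketched with axis-aligned rectangles standing in for partial regions; geometric containment stands in for the "divided from" relationship. The rectangle coordinates below are assumptions chosen for illustration and only loosely follow FIG. 4.

```python
# Link each finer partial region to the coarser region containing it.
def contains(outer, inner):
    """True if rectangle inner (x0, y0, x1, y1) lies inside outer."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def link_hierarchy(levels):
    """levels: {hierarchy_no: {name: rect}}, hierarchy (1) = coarsest.
    Returns {child_name: parent_name} links between adjacent hierarchies."""
    links = {}
    level_nos = sorted(levels)
    for upper, lower in zip(level_nos, level_nos[1:]):
        for child, crect in levels[lower].items():
            for parent, prect in levels[upper].items():
                if contains(prect, crect):
                    links[child] = parent
                    break
    return links

# Assumed geometry: R4 of hierarchy (1) is divided into R5, R6, R7.
levels = {
    1: {"R1": (0, 0, 3, 9), "R4": (3, 0, 9, 9)},
    2: {"R5": (3, 0, 6, 9), "R6": (6, 0, 9, 4), "R7": (6, 4, 9, 9)},
}
links = link_hierarchy(levels)
```

The resulting `links` dictionary plays the role of the link area information stored with each partial region.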
  • When the area dividing unit 2 has hierarchized the partial areas, the color approximating unit 3 approximates, for each hierarchized partial area, the pixel values indicating the colors of the pixels constituting that area with a continuous function.
  • FIG. 7 is an explanatory diagram showing attribute setting processing such as a side surrounding a partial region and sampling processing of pixel values by the color approximation unit 3.
  • FIG. 8 is an explanatory diagram showing a local coordinate system (U, V) of sampling points.
  • First, the color approximating unit 3 sets the attribute of each side surrounding a partial region to “continuous side” or “discontinuous side” (step ST7). That is, it determines whether or not an edge exists on a side surrounding the partial region, sets the attribute of a side on which an edge exists to “discontinuous side”, and sets the attribute of a side on which no edge exists to “continuous side”. For example, focusing on the partial region R1 shown in FIG. 4, since there is an edge at the boundary between the partial regions R1 and R2 (see FIGS. 4A and 4B), an edge exists on the right side of the partial region R1, and its attribute is therefore set to “discontinuous side”.
  • In the above, the attribute of a side surrounding a partial area is set according to the presence or absence of an edge, but it may instead be set based on the continuity of the colors near the side. That is, the color approximating unit 3 obtains the difference between the pixel values inside the partial area (the pixel values indicating the colors of its constituent pixels) and the pixel values in the adjacent areas around it; if the difference is equal to or greater than a predetermined value, the attribute of the side is set to “discontinuous side”, and if it is less than the predetermined value, to “continuous side”. As a method for determining the continuity of the pixel values, for example, the Euclidean distance between the sets of pixel values on either side of adjacent sides may be calculated.
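The Euclidean-distance continuity test just mentioned might look like this; the helper name `side_attribute` and the particular threshold are assumptions for illustration.

```python
import math

def side_attribute(inner_vals, outer_vals, threshold):
    """Compare pixel values just inside a side with those just outside;
    a large Euclidean distance marks the side as discontinuous."""
    d = math.dist(inner_vals, outer_vals)
    return "continuous" if d < threshold else "discontinuous"
```

For example, two nearly identical value sets yield a small distance and a "continuous" verdict, while a sharp colour jump across the side yields "discontinuous".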
  • In the example of FIG. 7, the attribute of the upper and right sides of the partial region Z1 is set to “discontinuous side”,
  • and the attribute of the lower and left sides of the partial region Z1 is set to “continuous side”.
  • the color approximating unit 3 always sets the attribute of the diagonal line or curve included in each partial region to “discontinuous side”.
  • When the color approximating unit 3 has set the attributes of the sides surrounding each partial region, it determines the pixel values to be sampled according to those attributes (step ST8).
  • the attributes of the upper and right sides of the partial area Z1 are “discontinuous sides”, and the attributes of the lower and left sides are “continuous sides”.
  • the partial area Z1 includes a diagonal line L, and the attribute of the diagonal line L is “discontinuous side”. For this reason, the partial region Z1 is divided into an upper part and a lower part by an oblique line L.
  • Since the upper part of the partial area Z1 is surrounded by lines having the “discontinuous side” attribute, when the pixel values indicating the colors of its constituent pixels are approximated with a continuous function, it is desirable not to mix them with the colors of the lower part of the partial area Z1 or of the areas above and to the right of the partial area Z1. For this reason, only the pixel values of the pixels constituting the upper part are determined as pixel values to be sampled; specifically, the pixel values at the correspondingly marked positions in FIG. 7B are determined as sampling targets.
  • Since the attribute of the lower and left sides of the lower part of the partial area Z1 is “continuous side”, when the pixel values indicating the colors of the pixels constituting the lower part are approximated with a continuous function, it is desirable that the colors change smoothly across into the areas below and to the left of the partial area Z1. For this reason, the pixel values of the pixels constituting the lower part, together with some pixel values in the areas below and to the left of the partial area Z1, are determined as pixel values to be sampled; specifically, the pixel values at the correspondingly marked positions in FIG. 7B are determined as sampling targets.
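The sampling rule above can be summarized as extending the sampling window past continuous sides only; the helper `sample_window` and the one-pixel margin are illustrative assumptions, not the patent's exact procedure.

```python
# Pixels inside the region are always sampled; across a "continuous"
# side the window is widened slightly into the neighbouring region,
# while a "discontinuous" side is never crossed.
def sample_window(x0, y0, x1, y1, sides, margin=1):
    """sides: {'top'|'bottom'|'left'|'right': 'continuous'|'discontinuous'}.
    Returns the (x0, y0, x1, y1) window from which values are sampled."""
    if sides["left"] == "continuous":
        x0 -= margin
    if sides["right"] == "continuous":
        x1 += margin
    if sides["top"] == "continuous":
        y0 -= margin
    if sides["bottom"] == "continuous":
        y1 += margin
    return (x0, y0, x1, y1)
```

With the lower and left sides continuous (as for the lower part of Z1), the window grows down and to the left, so neighbouring colours influence the fitted function and the colour varies smoothly across those sides.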
  • When the color approximating unit 3 has determined the pixel values to be sampled for each partial region hierarchized by the region dividing unit 2, it samples those pixel values and approximates them with a continuous function (step ST9).
  • For example, as shown in FIG. 8, a local coordinate system (U, V) of a surface with point 1 as the origin (0, 0) is defined within a given region,
  • where u and v are real parameters between “0” and “1”.
  • The color and luminance information at a point (u, v) sampled by the color approximating unit 3 can then be approximated by an arbitrary parametric function F(u, v).
  • As the parametric function, a Bezier surface or the “Ferguson patch” described in Non-Patent Document 1 can be used.
  • Alternatively, a sigmoid function F(u, v) that can express a steep edge can be used:
  • F(u, v) = 1 / (1 + exp(a·u + b·v + c))
  • Here, a, b, and c are constants and may be specified arbitrarily by the user.
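Written out directly, the sigmoid surface above behaves as follows; the particular constants a = -20, b = 0, c = 10 are chosen only to illustrate the steep edge near u = 0.5 and are not taken from the patent.

```python
import math

def sigmoid_patch(u, v, a, b, c):
    """F(u, v) = 1 / (1 + exp(a*u + b*v + c)) over local coordinates
    u, v in [0, 1]; a steep colour surface for one partial region."""
    return 1.0 / (1.0 + math.exp(a * u + b * v + c))

# A steep vertical edge near u = 0.5: the surface flips from ~0 on
# the left half of the region to ~1 on the right half.
left  = sigmoid_patch(0.1, 0.5, a=-20.0, b=0.0, c=10.0)   # ~0.0003
right = sigmoid_patch(0.9, 0.5, a=-20.0, b=0.0, c=10.0)   # ~0.9997
```

Because the transition width is controlled by the magnitude of a and b, a single patch can reproduce a sharp colour boundary that a smooth Bezier surface would blur.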
  • FIG. 9 is an explanatory diagram showing a configuration example of vector image data.
  • the vector image data includes area information and an area access table.
  • The area information is generated for each partial area belonging to each hierarchy, and an area access table is generated for each hierarchy; for example, when there are M hierarchies (1) to (M), M area access tables are generated.
  • As for the area information, for example, when there are 4 partial areas belonging to hierarchy (1), 3 belonging to hierarchy (2), and 6 belonging to hierarchy (3), as shown in FIG. 4, four pieces of area information are generated for hierarchy (1), three for hierarchy (2), and six for hierarchy (3).
  • The area information consists of boundary line information, boundary line attribute information, color information, link area information, and internal line information.
  • The boundary line information is coordinate information generated by the area dividing unit 2; it indicates the positions of the boundary lines (the sides surrounding a partial area and any oblique lines or curves included in it).
  • The boundary line attribute information is attribute information generated by the region dividing unit 2; it indicates the attributes of the boundary lines, for example, whether a side or curve surrounding the partial region is a “continuous side” or a “discontinuous side”.
  • The color information is information on color generated by the color approximating unit 3; it indicates the type of continuous function approximating the pixel values of the partial area and the parameters of that continuous function.
  • The link area information is information indicating the hierarchical relationships of the partial areas, generated by the area dividing unit 2; it indicates the area numbers identifying the upper- or lower-hierarchy partial areas linked to a given partial area, the hierarchy number of the hierarchy to which each partial area belongs, and so on.
  • The internal line information is information on internal lines, generated by the area dividing unit 2; it indicates the type of function describing the approximate curve of an oblique line or curve included in the partial area and the parameters of that function.
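As a data structure, the area information might be modelled like this; the field names and the example values are assumptions mirroring the five kinds of information listed above, not a format defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class RegionInfo:
    boundary_lines: list    # positions of the sides and boundary curves
    boundary_attrs: dict    # side -> "continuous" / "discontinuous"
    color_fn_type: str      # type of the continuous colour function
    color_params: tuple     # parameters of that function
    link_info: dict         # hierarchy number and linked region numbers
    internal_lines: list    # (function type, parameters) of internal curves

# Hypothetical example for the partial region R4 of FIG. 4.
r4 = RegionInfo(
    boundary_lines=[],
    boundary_attrs={"top": "discontinuous"},
    color_fn_type="sigmoid",
    color_params=(-20.0, 0.0, 10.0),
    link_info={"hierarchy": 1, "children": ["R5", "R6", "R7"]},
    internal_lines=[],
)
```

One such record per partial region, plus one area access table per hierarchy, together make up the vector image data stored in the vector image data storage unit 4.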
  • the area access table is a look-up table generated by the area dividing unit 2, and this area access table shows the correspondence between the image coordinates of the raster image and the partial areas hierarchized.
  • FIG. 10 is an explanatory diagram of a configuration example of the area access table.
  • Next, a method will be described in which an external device (for example, a car navigation device) randomly accesses the vector image data stored in the vector image data storage unit 4 by referring to the area access table and acquires a pixel value.
  • a 9 ⁇ 9 pixel raster image is divided into partial regions R1, R2, R3, and R4 as shown in FIG.
  • a minimum size table region access table
  • a region number indicating the partial region is substituted into the minimum size table.
  • the minimum size for maintaining the ratio of each partial area is 3 ⁇ 3 pixels (the size of the original raster image reduced to 1/3). Numbers 1 to 4 are assigned.
  • To acquire the pixel value at, for example, the image coordinates (6, 5), the external device multiplies the image coordinates by 1/3, the ratio of the area access table to the raster image, and rounds the result to obtain (2, 2).
  • Using the rounded value (2, 2) as a coordinate value of the area access table, it acquires from the table shown in FIG. 10 the region number “4” assigned to the coordinates (2, 2).
  • It then reads out the region information of that partial region (for example, its color information and internal line information) and, in accordance with the region information, determines and draws the color of each pixel constituting the partial area (the part of the raster image containing the image coordinates (6, 5)).
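The lookup described above can be sketched as follows. The table contents are an assumed layout consistent with the example (region 4 covering the lower-right cells); the clamping of rounded indices is an added safety measure not stated in the patent.

```python
# Random access via the area access table: scale the image coordinates
# by the table/image ratio, round, and index the table.
def lookup_region(table, scale, x, y):
    """table[row][col] -> region number; scale = table size / image size."""
    col = min(round(x * scale), len(table[0]) - 1)   # clamp to table bounds
    row = min(round(y * scale), len(table) - 1)
    return table[row][col]

# Assumed 3x3 table for the 9x9 raster image of the example.
access_table = [
    [1, 1, 2],
    [1, 4, 4],
    [3, 4, 4],
]
region = lookup_region(access_table, 1 / 3, 6, 5)   # image coords (6, 5) -> region 4
```

Image coordinates (6, 5) scale to (2.0, 1.67), which rounds to table cell (2, 2); only the region information for that one partial region then needs to be read and drawn, which is what makes the access random rather than sequential.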
  • As described above, the apparatus includes the edge detection unit 1, which detects the edges existing in a raster image and determines the importance of each edge in the raster image, and the area dividing unit 2, which divides the raster image into a plurality of partial areas using the edge of highest importance among the edges detected by the edge detection unit 1, and hierarchizes the divided partial areas by repeatedly dividing them using edges of lower importance. The color approximation unit 3 approximates, for each partial area hierarchized by the area dividing unit 2, the pixel values indicating the colors of the pixels constituting that partial area with a continuous function. This makes it possible to dynamically change the amount of image data actually used, with the effect that the image data can be accessed randomly.
  • For example, an external device (for example, a car navigation device) can dynamically change the area information required to draw a map at a given scale or from a given viewpoint.
  • Within the scope of the invention, any component of the embodiment can be modified, or any component of the embodiment can be omitted.
  • The present invention is suitable for an image vectorization apparatus that must be able to draw a desired image even on a device with a small amount of available memory, such as a car navigation apparatus.
  • 1 edge detection unit (edge detection means), 2 region dividing unit (region dividing means), 3 color approximation unit (color approximation means), 4 vector image data storage unit.

Abstract

An image vectorization device is provided with: an edge detection unit (1) for detecting edges present in a raster image and determining the degree of importance of each edge in the raster image; and a region dividing unit (2) for dividing the raster image into a plurality of partial regions using the edges with the highest degree of importance determined by the edge detection unit (1) among the detected edges, and hierarchizing the divided partial regions by repeating a process of dividing the partial regions into further partial regions using, in turn, the edges with lower degrees of importance. A color approximation unit (3) approximates, for each of the partial regions hierarchized by the region dividing unit (2), the pixel values indicating the colors of the pixels constituting that partial region by a continuous function.

Description

Image vectorization apparatus, image vectorization method, and image vectorization program

The present invention relates to an image vectorization apparatus, an image vectorization method, and an image vectorization program for converting, for example, a raster image into a vector image.
For example, Non-Patent Document 1 below proposes an image vectorization technique whose purpose is to make image editing easier and to prevent shapes from blurring or collapsing when the image is enlarged or reduced.
That is, Non-Patent Document 1 discloses an image vectorization apparatus that realizes image vectorization by dividing a raster image into mesh-like partial areas and approximating the pixel values in each partial area with a bicubic surface.
Here, a "raster image" is an image represented by a set of colored points, while a "vector image" is an image represented by points, lines, and surfaces, with its color information expressed by parametric equations.
"Image vectorization" refers to converting a raster image into a vector image.
A "point" denotes a two-dimensional, three-dimensional, or N-dimensional coordinate value; a "line" denotes a straight line or curve connecting two points; and a "surface" denotes a region enclosed by a plurality of lines.
Because the conventional image vectorization apparatus is configured as described above, vectorization is performed so that the raster image is reproduced faithfully. Consequently, when the number of partial areas is large, the number of parameters expressing the position information and the like of each partial area increases, and the size of the vectorized image data grows.
Furthermore, texture mapping that uses a graphics programmable shader requires random access to the vectorized image data when drawing an image (that is, obtaining the pixel value at specified texture coordinates without reconstructing the entire image). Because the pixel values within each partial area are approximated by a bicubic surface, however, computing such random access is difficult.
The present invention has been made to solve the above problems, and an object of the invention is to obtain an image vectorization apparatus, an image vectorization method, and an image vectorization program that can dynamically change the amount of image data used in accordance with changes in scale, viewpoint, and the like, and that allow random access to the image data.
An image vectorization apparatus according to the present invention includes: edge detection means for detecting edges existing in a raster image and determining the importance of each edge in the raster image; and region dividing means for dividing the raster image into a plurality of partial areas using the edge of highest importance among the edges detected by the edge detection means, and for hierarchizing the divided partial areas by repeatedly dividing those partial areas into further partial areas using edges of successively lower importance. Color approximation means approximates, for each partial area hierarchized by the region dividing means, the pixel values indicating the colors of the pixels constituting that partial area with a continuous function.
According to the present invention, the edge detection means detects the edges existing in the raster image and determines their importance; the region dividing means divides the raster image into a plurality of partial areas using the edge of highest importance and hierarchizes the divided partial areas by repeatedly dividing them with edges of successively lower importance; and the color approximation means approximates, for each hierarchized partial area, the pixel values indicating the colors of the pixels constituting that area with a continuous function. This configuration produces the effect that the amount of image data actually used can be changed dynamically in accordance with changes in scale, viewpoint, and the like, and that the image data can be accessed randomly.
FIG. 1 is a block diagram showing an image vectorization apparatus according to Embodiment 1 of the present invention.
FIG. 2 is a flowchart showing the processing contents (image vectorization method) of the image vectorization apparatus according to Embodiment 1 of the present invention.
FIG. 3 is an explanatory diagram showing the edge detection processing and the importance determination processing by the edge detection unit 1.
FIG. 4 is an explanatory diagram showing the region division processing and the partial-region hierarchization processing by the region dividing unit 2.
FIG. 5 is an explanatory diagram showing a region dividing method of the region dividing unit 2.
FIG. 6 is an explanatory diagram showing another region dividing method of the region dividing unit 2.
FIG. 7 is an explanatory diagram showing the attribute setting processing for the sides and other elements surrounding a partial region and the pixel value sampling processing by the color approximation unit 3.
FIG. 8 is an explanatory diagram showing the local coordinate system (U, V) of the surface containing the sampling points.
FIG. 9 is an explanatory diagram showing a configuration example of vector image data.
FIG. 10 is an explanatory diagram showing a configuration example of the area access table.
FIG. 11 is an explanatory diagram showing image drawing processing by a car navigation apparatus.
Hereinafter, in order to explain the present invention in more detail, embodiments for carrying out the invention are described with reference to the accompanying drawings.

Embodiment 1.

FIG. 1 is a block diagram showing an image vectorization apparatus according to Embodiment 1 of the present invention.
In FIG. 1, the edge detection unit 1 is composed of, for example, a semiconductor integrated circuit mounting a CPU, an MPU, or a GPU (Graphics Processing Unit), or a one-chip microcomputer. It detects the edges existing in a raster image, determines the importance of each edge in the raster image, and outputs edge information indicating the detected edges and importance information indicating their importance to the region dividing unit 2. The edge detection unit 1 constitutes the edge detection means.
The region dividing unit 2 is composed of, for example, a semiconductor integrated circuit mounting a CPU, MPU, or GPU, or a one-chip microcomputer. Among the edges detected by the edge detection unit 1, it uses the edge with the highest importance indicated by the importance information output from the edge detection unit 1 to divide the raster image into a plurality of partial areas, and then hierarchizes the divided partial areas by repeatedly dividing them into further partial areas using edges of successively lower importance. The region dividing unit 2 constitutes the region dividing means.
The color approximation unit 3 is composed of, for example, a semiconductor integrated circuit mounting a CPU, MPU, or GPU, or a one-chip microcomputer. For each partial area hierarchized by the region dividing unit 2, it approximates the pixel values indicating the colors of the pixels constituting that partial area with a continuous function. The color approximation unit 3 constitutes the color approximation means.
The vector image data storage unit 4 is composed of a storage device such as a RAM or a hard disk, and stores the vector image data generated by the image vectorization apparatus (for example, region information indicating the partial areas hierarchized by the region dividing unit 2, and an area access table indicating the correspondence between the image coordinates of the raster image and the hierarchized partial areas).
In the example of FIG. 1, it is assumed that the edge detection unit 1, the region dividing unit 2, and the color approximation unit 3, which are the components of the image vectorization apparatus, are each configured with dedicated hardware. When the image vectorization apparatus is configured with a computer, however, an image vectorization program describing the processing contents of the edge detection unit 1, the region dividing unit 2, and the color approximation unit 3 may be stored in the memory of the computer, and the CPU of the computer may execute the image vectorization program stored in that memory.
FIG. 2 is a flowchart showing the processing contents (image vectorization method) of the image vectorization apparatus according to Embodiment 1 of the present invention.
Next, the operation will be described.
When a raster image is input, the edge detection unit 1 detects the edges existing in the raster image (step ST1).
Having detected the edges, the edge detection unit 1 determines the importance of each edge in the raster image (step ST2).
FIG. 3 is an explanatory diagram showing the edge detection processing and the importance determination processing by the edge detection unit 1.
The edge detection processing and importance determination processing by the edge detection unit 1 are described in detail below.
When a raster image is input, before detecting the edges existing in the raster image, the edge detection unit 1 reduces the raster image to a plurality of scales as shown in FIG. 3(A), generating a plurality of reduced images.
In the example of FIG. 3(A), a reduced image of 1/2 the size of the raster image and a reduced image of 1/4 the size are generated.
After generating the reduced images, the edge detection unit 1 detects the edges existing in the images at each scale (including the original raster image): that is, the edges in the full-size raster image, the edges in the 1/2-size reduced image, and the edges in the 1/4-size reduced image.
For the edge detection itself, an existing method such as the Canny method can be used.
When the edge detection unit 1 has detected the edges existing in the images at the respective scales, it determines the importance of each edge.
For example, an edge that roughly captures the outline of an object in the image is judged to be a highly important edge, while an edge that captures fine details of the outline is judged to be an edge of low importance.
Specifically, as shown in FIG. 3(B), an edge existing in the 1/4-size reduced image, the image at the smallest scale, is judged to be an edge of the highest importance, and importance 3 (the highest importance) is assigned to it.
Next, among the edges existing in the 1/2-size reduced image, an edge that does not exist in the 1/4-size reduced image (an edge not assigned importance 3) is judged to be an edge of medium importance, and importance 2 (medium importance) is assigned to it.
Finally, among the edges existing in the full-size raster image, an edge that does not exist in either the 1/2-size or the 1/4-size reduced image (an edge not assigned importance 3 or 2) is judged to be an edge of the lowest importance, and importance 1 (the lowest importance) is assigned to it.
As a result, an edge detected in the images at all scales is given the highest importance 3, and an edge detected only in the full-size raster image is given the lowest importance 1.
After performing the edge detection processing and the importance determination processing, the edge detection unit 1 outputs edge information indicating the edges existing in the image at each scale, together with importance information indicating the importance assigned to each edge, to the region dividing unit 2.
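The importance assignment just described can be sketched as follows. The edge detector itself (for example, the Canny method) is outside the scope of this sketch, so the edge pixels found at each scale are given as precomputed sets with illustrative values, expressed in full-resolution coordinates.

```python
# Edge pixels detected at each scale (illustrative values): an edge that
# survives stronger reduction captures coarser structure.
edges_full = {(0, 0), (1, 1), (2, 2), (3, 3)}   # original raster image
edges_half = {(0, 0), (1, 1), (2, 2)}           # also found in the 1/2 image
edges_quarter = {(0, 0), (1, 1)}                # also found in the 1/4 image

def assign_importance(full, half, quarter):
    """Edges found at every scale get importance 3, edges found down to the
    1/2 image (but not 1/4) get 2, and edges found only in the original
    raster image get 1."""
    importance = {}
    for p in full:
        if p in quarter:
            importance[p] = 3
        elif p in half:
            importance[p] = 2
        else:
            importance[p] = 1
    return importance

imp = assign_importance(edges_full, edges_half, edges_quarter)
print(imp[(0, 0)], imp[(2, 2)], imp[(3, 3)])  # -> 3 2 1
```

The `importance` mapping corresponds to the importance information that the edge detection unit 1 passes to the region dividing unit 2.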
On receiving the edge information and the importance information from the edge detection unit 1, the region dividing unit 2 refers to them, divides the raster image into a plurality of partial areas (step ST3), and then repeatedly divides those partial areas into further partial areas (steps ST4 and ST5).
That is, the region dividing unit 2 refers to the importance information output from the edge detection unit 1, identifies the edges of importance 3 among the detected edges, and uses them to divide the raster image into a plurality of partial areas (step ST3).
Next, referring to the importance information, it identifies the edges of importance 2, one level lower than importance 3, and uses them to divide the partial areas obtained with the importance 3 edges into further partial areas (steps ST4 and ST5).
It then identifies the edges of importance 1, one level lower than importance 2, and uses them to divide the partial areas obtained with the importance 3 and 2 edges into further partial areas (steps ST4 and ST5).
Since no edge of importance lower than 1 exists, the iteration of steps ST4 and ST5 then ends.
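The control flow of steps ST3 to ST5 can be sketched as a loop over decreasing importance levels. The helper `split()` is a hypothetical stand-in (it simply halves a `(name, size)` region whenever edges are present); a real implementation would cut each region along the division lines described below for FIG. 5.

```python
def split(region, edges):
    # Hypothetical stand-in for geometric division: halve the region
    # whenever any edge of the current level is given.
    name, size = region
    if not edges or size <= 1:
        return [region]          # nothing to divide
    return [(name + "a", size // 2), (name + "b", size - size // 2)]

def divide_by_importance(image, edges_by_importance):
    """Divide with the most important edges first, then re-divide the
    resulting partial areas with each lower importance level in turn."""
    regions = [image]
    hierarchy = {}               # parent region -> child regions
    for level in sorted(edges_by_importance, reverse=True):  # 3, then 2, then 1
        next_regions = []
        for r in regions:
            children = split(r, edges_by_importance[level])
            if len(children) > 1:
                hierarchy[r] = children
            next_regions.extend(children)
        regions = next_regions   # lower levels re-divide these regions
    return regions, hierarchy

regions, hierarchy = divide_by_importance(("R", 4), {3: ["e3"], 2: [], 1: ["e1"]})
print(len(regions))  # -> 4
```

The loop terminates once the lowest importance level has been processed, matching the end of the iteration of steps ST4 and ST5.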
The processing of the region dividing unit 2 is described in detail below.
FIG. 4 is an explanatory diagram showing the region division processing and the partial-region hierarchization processing by the region dividing unit 2.
First, using the edges of importance 3, the highest importance (see FIG. 4(A)), the region dividing unit 2 performs the raster image division processing, dividing the raster image into the partial regions R1, R2, R3, and R4 (see FIG. 4(B)).
The raster image division processing is now described in detail.
FIG. 5 is an explanatory diagram showing a region dividing method of the region dividing unit 2.
As shown in FIG. 5, region division in the case where two edges, edge (1) and edge (2), exist in the raster image is described as an example.
First, as shown in FIG. 5(A), the region dividing unit 2 finds the inflection points, branch points (including intersections), and end points of the edges existing in the image.
In the example of FIG. 5, the end point of edge (1), the intersection of edge (1) and edge (2), and the inflection point of edge (2) are found.
Next, as shown in FIG. 5(B), the region dividing unit 2 finds region dividing lines that pass through the horizontal or vertical edges existing in the image, and divides the image along those lines.
In the example of FIG. 5, edge (1), a horizontal edge, exists in the image, so a region dividing line passing through edge (1) is found. No vertical edge exists in the image in this example.
Next, as shown in FIG. 5(C), the region dividing unit 2 finds horizontal or vertical region dividing lines passing through the end points of the edges, and divides the image along them.
In the example of FIG. 5, the end point of edge (1) exists in the image, so a vertical region dividing line passing through that end point is found. The horizontal dividing line through the end point of edge (1) coincides with the dividing line of FIG. 5(B) and is therefore not added here.
Next, as shown in FIG. 5(D), the region dividing unit 2 finds horizontal or vertical region dividing lines passing through the intersections of the edges, and divides the image along them.
In the example of FIG. 5, the intersection of edge (1) and edge (2) exists in the image, so a vertical region dividing line passing through that intersection is found. The horizontal dividing line through the intersection coincides with the dividing line of FIG. 5(B) and is therefore not added here.
Next, as shown in FIG. 5(E), the region dividing unit 2 finds horizontal or vertical region dividing lines passing through the inflection points of the edges, and divides the image along them.
In the example of FIG. 5, the inflection point of edge (2) exists in the image, so both a horizontal and a vertical region dividing line passing through that inflection point are found.
The region dividing unit 2 repeats the operations of FIGS. 5(B) to 5(E) until each rectangular partial region of the divided image contains no more than one curved edge.
In the operations of FIGS. 5(B) to 5(E), a newly added region dividing line may or may not be allowed to cross the existing region dividing lines.
Finally, as shown in FIG. 5(F), the region dividing unit 2 obtains an approximate curve by fitting a quadratic or cubic curve to any oblique line or curve contained in a partial region, and records internal line information, indicating the type of function representing the approximate curve and the parameters of that function, as one item of the region information (attribute information) of that partial region.
In the example of FIG. 5(F), the kinds of oblique lines and curves contained in a rectangular partial region are not restricted, but the region division may also be performed with the kinds of curves restricted to a fixed set of patterns.
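The division-line construction of FIGS. 5(B) to 5(E) can be sketched as follows. This is a simplified version under two stated assumptions: every dividing line is extended across the whole image (the text allows lines that do not cross existing ones), and the repeat-until-one-curved-edge-per-rectangle rule is omitted. The feature point coordinates are illustrative.

```python
def division_lines(width, height, feature_points):
    """Collect the vertical (x) and horizontal (y) cut positions induced by
    edge end points, branch points, and inflection points."""
    xs = sorted({x for x, _ in feature_points if 0 < x < width})
    ys = sorted({y for _, y in feature_points if 0 < y < height})
    return xs, ys

def partition(width, height, feature_points):
    """Split the (0,0)-(width,height) rectangle into grid cells along the
    horizontal and vertical dividing lines."""
    xs, ys = division_lines(width, height, feature_points)
    xs = [0] + xs + [width]
    ys = [0] + ys + [height]
    return [(x0, y0, x1, y1)
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]

# End point of edge (1) at (6, 4), intersection at (3, 4), inflection at (5, 7):
cells = partition(9, 9, [(6, 4), (3, 4), (5, 7)])
print(len(cells))  # 4 vertical bands x 3 horizontal bands -> 12
```

Each resulting rectangle would then receive internal line information approximating any oblique line or curve it contains, as described above for FIG. 5(F).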
FIG. 6 is an explanatory diagram showing another region dividing method of the region dividing unit 2.
The image may be divided into a plurality of partial regions using the inflection points, branch points (including intersections), and end points of the edges as shown in FIG. 5; alternatively, it may be divided using one of the region patterns (1), (2), and (3) shown in FIG. 6(A).
Here, region pattern (1) is a region containing no dividing line, and region pattern (2) is a region containing a dividing line described by a curve or straight line passing through the upper-right and lower-left corners.
Region pattern (3) is a region containing a dividing line described by a curve or straight line passing through the upper-left and lower-right corners.
For example, as shown in FIG. 6(B), when two edges, edge (1) and edge (2), exist in the image, the region dividing unit 2 selects, for each small region serving as a partial region, one of the region patterns (1), (2), and (3) to match the edges (for example, pattern (1) for a small region containing neither edge (1) nor edge (2), and pattern (2) or pattern (3), depending on the direction of the curve or straight line, for a small region containing edge (2)). By combining the patterns selected for the small regions, the region division result shown in FIG. 6(C) is obtained.
Expressing each partial region by one of the three region patterns restricts the kinds of curves that can appear in a partial region, which has the advantage of reducing the number of curve parameters.
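The pattern selection for one small region can be sketched as a simple classification by the pair of corners its edge segment connects. The corner names and the classification rule are illustrative assumptions; the patent only fixes the three patterns themselves.

```python
def select_pattern(edge_corners):
    """Pattern (1): the cell contains no dividing line.
    Pattern (2): dividing line from the upper-right to the lower-left corner.
    Pattern (3): dividing line from the upper-left to the lower-right corner."""
    if edge_corners is None:
        return 1
    if set(edge_corners) == {"upper_right", "lower_left"}:
        return 2
    if set(edge_corners) == {"upper_left", "lower_right"}:
        return 3
    # Restricting curves to the three patterns means anything else is invalid.
    raise ValueError("edge does not match any permitted pattern")

print(select_pattern(None))                           # -> 1
print(select_pattern(("upper_right", "lower_left")))  # -> 2
print(select_pattern(("lower_right", "upper_left")))  # -> 3
```

Because every cell is one of only three cases, the per-cell description reduces to a pattern number plus the parameters of at most one curve, which is the parameter saving noted above.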
After dividing the raster image into the partial regions R1, R2, R3, and R4 using the edges of importance 3, the highest importance (see FIG. 4(A)), the region dividing unit 2 performs the division processing on the partial regions using the edges of importance 2, the next highest importance (see FIG. 4(A)), dividing them into further partial regions (see FIG. 4(B)).
In the example of FIG. 4, an edge of importance 2 exists inside the partial region R4, so the partial region R4 is divided into the partial regions R5, R6, and R7.
After dividing the partial region R4 into the partial regions R5, R6, and R7 using the edges of importance 2 (see FIG. 4(A)), the region dividing unit 2 performs the division processing using the edges of importance 1, the lowest importance (see FIG. 4(A)), dividing the partial regions into further partial regions (see FIG. 4(B)).
In the example of FIG. 4, an edge of importance 1 crosses the partial regions R1, R2, and R3, so the partial region R1 is divided into the partial regions R8 and R11, the partial region R2 into the partial regions R9 and R12, and the partial region R3 into the partial regions R10 and R13.
When the region division processing is complete, the region dividing unit 2 hierarchizes the divided partial regions R1 to R13 (step ST6).
In the example of FIG. 4, the partial regions R1, R2, R3, and R4, obtained using only the edges of importance 3, the highest importance, are defined as belonging to hierarchy (1) (the top hierarchy), and the hierarchy number indicating hierarchy (1) is assigned to them.
The partial regions R5, R6, and R7, obtained by additionally using the edges of importance 2, are defined as belonging to hierarchy (2) (the middle hierarchy), and the hierarchy number indicating hierarchy (2) is assigned to them.
The partial regions R8, R9, R10, R11, R12, and R13, obtained by additionally using the edges of importance 1, are defined as belonging to hierarchy (3) (the bottom hierarchy), and the hierarchy number indicating hierarchy (3) is assigned to them.
After assigning the hierarchy numbers to the partial regions R1 to R13, the region dividing unit 2 links the partial regions that have a correspondence relationship among the partial regions R1 to R13 belonging to hierarchies (1) to (3).
For example, since the partial regions R5, R6, and R7 are regions divided from the partial region R4, there is a correspondence between the partial regions R5, R6, and R7 and the partial region R4, and as shown in FIG. 4(C), the partial region R4 is linked to the partial regions R5, R6, and R7.
Likewise, since the partial regions R8 and R11 are regions divided from the partial region R1, there is a correspondence between the partial regions R8 and R11 and the partial region R1, and as shown in FIG. 4(C), the partial region R1 is linked to the partial regions R8 and R11.
Similarly, the partial region R2 is linked to the partial regions R9 and R12, which are divided from it, and the partial region R3 is linked to the partial regions R10 and R13, which are divided from it (see FIG. 4(C)).
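As an illustration only (the patent does not prescribe any concrete data structure), the hierarchy numbers and links of FIG. 4 could be held, for example, as follows; the dictionary layout and the `children` helper are assumptions:

```python
# Illustrative sketch, not part of the patent: the hierarchies and links
# of FIG. 4, using the patent's region names R1..R13.

hierarchy = {
    1: ["R1", "R2", "R3", "R4"],                   # importance-3 edges only
    2: ["R5", "R6", "R7"],                          # importance-2 edges added
    3: ["R8", "R9", "R10", "R11", "R12", "R13"],    # importance-1 edges added
}

# Links between corresponding regions across hierarchies (FIG. 4(C)).
links = {
    "R4": ["R5", "R6", "R7"],
    "R1": ["R8", "R11"],
    "R2": ["R9", "R12"],
    "R3": ["R10", "R13"],
}

def children(region):
    """Return the lower-hierarchy regions linked to `region` (empty if none)."""
    return links.get(region, [])
```

With such a structure, following a link from a coarse region to its finer subdivisions is a single dictionary lookup.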
When the hierarchization processing of the partial regions by the region dividing unit 2 is finished, the color approximating unit 3 approximates, for each partial region hierarchized by the region dividing unit 2, the pixel values indicating the colors of the pixels constituting that partial region with a continuous function.
Here, FIG. 7 is an explanatory diagram showing the attribute setting processing for the sides surrounding a partial region and the sampling processing of pixel values performed by the color approximating unit 3.
FIG. 8 is an explanatory diagram showing the local coordinate system (U, V) of the sampling points.
Hereinafter, the processing of the color approximating unit 3 will be described in detail with reference to FIGS. 7 and 8.
For each partial region hierarchized by the region dividing unit 2, the color approximating unit 3 sets the attribute of each side surrounding the partial region to “continuous side” or “discontinuous side” (step ST7).
That is, the color approximating unit 3 determines whether or not an edge exists on a side surrounding the partial region, sets the attribute of a side on which an edge exists to “discontinuous side”, and sets the attribute of a side on which no edge exists to “continuous side”.
For example, focusing on the partial region R1 shown in FIG. 4, since there is an edge at the boundary between the partial region R1 and the partial region R2 (see FIGS. 4(A) and 4(B)), an edge exists on the right side of the partial region R1, and the attribute of that side is set to “discontinuous side”.
On the other hand, since there is no edge at the boundary between the partial region R1 and the partial region R4 (see FIGS. 4(A) and 4(B)), no edge exists on the lower side of the partial region R1, and the attribute of that side is set to “continuous side”.
Here, the attributes of the sides surrounding a partial region are set according to the presence or absence of edges, but the attributes may instead be set based on the continuity of the colors near the sides surrounding the partial region.
That is, the color approximating unit 3 calculates the difference between the pixel values inside the partial region (the pixel values indicating the colors of the pixels constituting the partial region) and the pixel values of the surrounding region (the adjacent region); if the difference is equal to or greater than a predetermined value, it sets the attribute of the side surrounding the partial region to “discontinuous side”.
On the other hand, if the difference between the pixel values is less than the predetermined value, it sets the attribute of the side surrounding the partial region to “continuous side”.
As a method for determining the continuity of the pixel values, for example, the Euclidean distance between the sets of pixel values around adjacent sides may be calculated.
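The continuity test just described could be sketched, for example, as follows. The function name, the mean-color comparison, and the threshold value are assumptions; the patent only requires comparing pixel values by a measure such as the Euclidean distance:

```python
import math

def side_attribute(inner_pixels, neighbor_pixels, threshold=30.0):
    """Hypothetical sketch of the color-continuity test: compare pixel
    values near a side inside the region with those in the adjacent
    region, and mark the side discontinuous when the Euclidean distance
    between the mean colors reaches a threshold.

    inner_pixels / neighbor_pixels: lists of (r, g, b) tuples.
    """
    def mean(pixels):
        n = len(pixels)
        return tuple(sum(p[c] for p in pixels) / n for c in range(3))

    mi, mn = mean(inner_pixels), mean(neighbor_pixels)
    # Euclidean distance between the two mean colors
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(mi, mn)))
    return "discontinuous" if dist >= threshold else "continuous"
```

For instance, a red region next to a blue region yields “discontinuous”, while a red region next to a nearly identical red yields “continuous”.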
For example, focusing on the partial region Z1 shown in FIG. 7(A) (a region whose interior is red), suppose the region above the partial region Z1 is yellow and the region to its right is blue, both differing from the red of the partial region Z1. For this reason, the attributes of the upper side and the right side of the partial region Z1 are set to “discontinuous side”.
On the other hand, since the regions below and to the left of the partial region Z1 are the same red, the attributes of the lower side and the left side of the partial region Z1 are set to “continuous side”.
Note that the color approximating unit 3 always sets the attribute of any diagonal line or curve contained in a partial region to “discontinuous side”.
After setting the attributes of the sides surrounding each partial region and of the diagonal lines or curves contained in it, the color approximating unit 3 determines the pixel values to be sampled according to those attribute settings (step ST8).
For example, focusing on the partial region Z1 shown in FIG. 7(A), the attributes of the upper side and the right side of the partial region Z1 are “discontinuous side”, and the attributes of the lower side and the left side are “continuous side”.
The partial region Z1 also contains a diagonal line L, whose attribute is “discontinuous side”.
The diagonal line L therefore divides the partial region Z1 into an upper part and a lower part.
Since the upper part of the partial region Z1 is surrounded by lines having the “discontinuous side” attribute, when the pixel values indicating the colors of the pixels constituting the upper part are approximated with a continuous function, those colors should not mix with the colors of the lower part of the partial region Z1 or of the regions above and to the right of it. For this reason, only the pixel values of the pixels constituting the upper part are chosen as the pixel values to be sampled. Specifically, the pixel values at the positions marked “●” in FIG. 7(B) are chosen as sampling targets.
For the lower part of the partial region Z1, the attributes of the lower side and the left side are “continuous side”, so when the pixel values indicating the colors of the pixels constituting the lower part are approximated with a continuous function, the colors should vary smoothly into the regions below and to the left of the partial region Z1. For this reason, the pixel values of the pixels constituting the lower part and some of the pixel values in the regions below and to the left of the partial region Z1 are chosen as the pixel values to be sampled. Specifically, the pixel values at the positions marked “○” in FIG. 7(B) are chosen as sampling targets.
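The selection of sampling targets in step ST8 could be sketched roughly as follows; the function and argument names are hypothetical, not from the patent:

```python
def sampling_points(region_pixels, neighbor_pixels, side_attrs):
    """Hypothetical sketch of the step-ST8 decision: interior pixels are
    always sampled; across a "continuous" side, some pixels of the
    adjacent region are sampled too, so the fitted function varies
    smoothly into the neighbor.

    region_pixels:   pixel positions inside the (sub)region
    neighbor_pixels: dict mapping a side name to pixel positions just
                     across that side
    side_attrs:      dict mapping a side name to "continuous" or
                     "discontinuous"
    """
    points = list(region_pixels)  # interior pixels are always sampled
    for side, attr in side_attrs.items():
        if attr == "continuous":
            points.extend(neighbor_pixels.get(side, []))
    return points
```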
After determining the pixel values to be sampled for each partial region hierarchized by the region dividing unit 2, the color approximating unit 3 samples those pixel values and approximates them with a continuous function (step ST9).
For example, as shown in FIG. 8, a local coordinate system (U, V) of a surface is defined within a given region, with point 1 as the origin (0, 0). Here u and v are real parameters ranging from 0 to 1; the coordinates of point 2 are defined as (u = 1, v = 0), those of point 3 as (u = 1, v = 1), and those of point 4 as (u = 0, v = 1).
The color and luminance information at a point (u, v) sampled by the color approximating unit 3 can then be approximated by an arbitrary parametric function F(u, v).
As the parametric function F(u, v), a Bézier surface or the “Ferguson patch” described in Non-Patent Document 1 above can be used.
Alternatively, a sigmoid function F(u, v), which can represent steep edges, can be used:
  F(u,v)=1/(1+exp(a×u+b×v+c))
Here, a, b, and c are constants, which the user may specify arbitrarily.
The pixel values may also be approximated using a combination of basis functions as the parametric function F(u, v).
That is, the function F(u, v) may be constructed from N basis functions f_i(u, v) (i = 1, 2, ..., N):
  F(u,v)=Σ_i f_i(u,v)
As the basis functions, for example, radial basis functions can be used.
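A sketch of such a basis-function combination, assuming Gaussian radial basis functions with fixed example centers and weights; in practice the weights would be fitted to the sampled pixel values:

```python
import math

def rbf(u, v, cu, cv, w, s=0.2):
    """One Gaussian radial basis function of weight w centered at (cu, cv);
    the Gaussian form and width s are assumptions for illustration."""
    d2 = (u - cu) ** 2 + (v - cv) ** 2
    return w * math.exp(-d2 / (2 * s * s))

def F(u, v, bases):
    """F(u,v) = sum_i f_i(u,v), as in the text: a combination of basis
    functions, here a list of (center_u, center_v, weight) triples."""
    return sum(rbf(u, v, cu, cv, w) for (cu, cv, w) in bases)
```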
When the image vectorization apparatus executes the processing of steps ST1 to ST9, vector image data is generated and stored in the vector image data storage unit 4.
FIG. 9 is an explanatory diagram showing a configuration example of the vector image data.
As shown in FIG. 9, the vector image data consists of region information and area access tables; the region information is generated for each partial region belonging to each hierarchy, and an area access table is generated for each hierarchy.
For example, when there are M hierarchies (1) to (M), M area access tables are generated.
As for the region information, when, as shown for example in FIG. 4, there are four partial regions belonging to hierarchy (1), three belonging to hierarchy (2), and six belonging to hierarchy (3), four pieces of region information are generated for hierarchy (1), three for hierarchy (2), and six for hierarchy (3).
Here, the region information consists of boundary line information, boundary line attribute information, color information, link region information, and internal line information.
The boundary line information is coordinate information generated by the region dividing unit 2, and indicates the positions of the boundary lines (the sides surrounding the partial region and the diagonal lines or curves contained in it).
The boundary line attribute information is attribute information generated by the region dividing unit 2, and indicates the attributes of the boundary lines; for example, it indicates whether a side or curve surrounding the partial region is a “continuous side” or a “discontinuous side”.
The color information is information about color generated by the color approximating unit 3, and indicates the type of continuous function approximating the pixel values of the partial region and the parameters of that continuous function.
The link region information is information generated by the region dividing unit 2 that indicates the hierarchical relationships of the partial regions; it includes the region numbers identifying the upper- or lower-hierarchy partial regions linked to a given partial region, and the hierarchy number indicating the hierarchy to which each partial region belongs.
The internal line information is information about internal lines generated by the region dividing unit 2, and indicates the type of function representing the approximate curve of the diagonal line or curve contained in the partial region, together with the parameters of that function.
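Purely as an illustration, one region-information record could be represented as follows. All field names and types are assumptions; the patent specifies only the five kinds of information, not a concrete layout:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RegionInfo:
    """Hypothetical layout of one region-information record."""
    boundary: List[Tuple[int, int]]       # boundary line coordinates
    boundary_attrs: List[str]             # "continuous"/"discontinuous" per side
    color_function: str                   # type of continuous function, e.g. "sigmoid"
    color_params: List[float]             # parameters of that function
    link: Dict = field(default_factory=dict)       # hierarchy number, linked region numbers
    internal_lines: List[Dict] = field(default_factory=list)  # curve type + parameters
```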
The area access table is a lookup table generated by the region dividing unit 2, and indicates the correspondence between the image coordinates of the raster image and the hierarchized partial regions.
As will be described later, when drawing an image from a certain viewpoint, given the image coordinates of that viewpoint, the partial region containing those image coordinates can easily be identified by referring to the area access table.
FIG. 10 is an explanatory diagram showing a configuration example of the area access table.
Hereinafter, a method will be described in which an external device (for example, a car navigation device), when drawing an image from a certain viewpoint, refers to the area access table to randomly access the vector image data stored in the vector image data storage unit 4 and obtain pixel values.
Here, for convenience of explanation, assume that a 9 × 9 pixel raster image is divided into partial regions R1, R2, R3, and R4 as shown in FIG. 10(A).
A table of the minimum size that preserves the ratios of the partial regions contained in the original raster image (the area access table) is then defined, and the region number identifying each partial region is stored in this minimum-size table.
In the example of FIG. 10(B), the minimum size that preserves the ratios of the partial regions is 3 × 3 pixels (the size of the original raster image reduced to 1/3), and the region numbers 1 to 4 are assigned to the area access table.
For example, consider the case of identifying the partial region corresponding to the image coordinates (6, 5) in the raster image of FIG. 10(A).
In this case, the external device multiplies the image coordinates (6, 5) by 1/3, the ratio of the area access table to the raster image, rounds the result, and obtains the rounded value (2, 2).
Using this rounded value (2, 2) as coordinates in the area access table, the external device obtains the region number “4” assigned to the coordinates (2, 2) in the area access table shown in FIG. 10(B).
Having obtained the region number “4”, the external device reads, from the vector image data stored in the vector image data storage unit 4, the region information (for example, the color information and internal line information) of the partial region with region number “4”, and determines and draws the color of each pixel constituting that partial region (the part of the raster image containing the image coordinates (6, 5)) according to that region information.
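The lookup just described can be sketched as follows. The rounding convention (round half up) follows the worked example; the function name and the table cells other than (2, 2) are assumptions:

```python
def lookup_region(x, y, table, ratio):
    """Hypothetical sketch of the random-access lookup: scale the image
    coordinates by `ratio` (table size / raster size, 1/3 in FIG. 10),
    round half up, and read the region number at that table cell."""
    tx = int(x * ratio + 0.5)
    ty = int(y * ratio + 0.5)
    return table[(tx, ty)]

# Hypothetical 3x3 area access table for FIG. 10(B); only the value at
# (2, 2) -> region 4 is stated in the text, the other cells are assumed.
table = {(1, 1): 1, (2, 1): 2, (3, 1): 2,
         (1, 2): 3, (2, 2): 4, (3, 2): 4,
         (1, 3): 3, (2, 3): 4, (3, 3): 4}
```

For image coordinates (6, 5) and ratio 1/3, the scaled coordinates round to cell (2, 2), which holds region number 4, as in the worked example.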
As is apparent from the above, the first embodiment provides: the edge detection unit 1, which detects the edges present in a raster image and determines the importance of each edge in the raster image; and the region dividing unit 2, which divides the raster image into a plurality of partial regions using, among the edges detected by the edge detection unit 1, the edge determined by the edge detection unit 1 to have the highest importance, and hierarchizes the divided partial regions by repeatedly dividing them into further partial regions using edges of successively lower importance in order. The color approximating unit 3 then approximates, for each partial region hierarchized by the region dividing unit 2, the pixel values indicating the colors of the pixels constituting that partial region with a continuous function. This configuration makes it possible to dynamically change the amount of image data used according to changes of scale or viewpoint, and to randomly access the image data.
Hereinafter, the effects of the first embodiment will be described in detail.
First, since the region information constituting the vector image data generated by the image vectorization apparatus is generated for each hierarchized partial region, an external device (for example, a car navigation device) can dynamically change the region information needed for drawing when drawing a map or the like at a given scale or from a given viewpoint.
For example, consider drawing a vector image of a building in the three-dimensional map of a car navigation device.
When the three-dimensional map is viewed from above, the drawing viewpoint is far from the target object, so as shown in FIG. 11, only the region information of a low hierarchy (for example, the region information of hierarchy (1)) is used to display a rough image. In this case, it suffices to load only the low-hierarchy region information into the memory of the car navigation device.
On the other hand, when the drawing viewpoint in the three-dimensional map approaches the target building, the region information of higher hierarchies (for example, the region information of hierarchies (1) to (3)) is also used to reproduce the details of the image.
As a result, it suffices to load only the region information needed for the image being drawn, so a desired image can be drawn even when the amount of memory available in the car navigation device is small (in embedded hardware, the amount of available memory is generally limited).
In addition, using the area access table that forms part of the vector image data makes random access to the vector image data easy, so that the pixel values at specified coordinates can be obtained partially, without drawing the entire vector image.
Within the scope of the present invention, any component of the embodiment may be modified, and any component of the embodiment may be omitted.
The present invention is suitable for an image vectorization apparatus that must enable drawing of a desired image even on a car navigation device or the like with a small amount of available memory.
1: edge detection unit (edge detection means); 2: region dividing unit (region dividing means); 3: color approximating unit (color approximating means); 4: vector image data storage unit.

Claims (11)

  1.  An image vectorization device comprising: edge detection means for detecting edges present in a raster image and determining the importance of each edge in the raster image; region dividing means for dividing the raster image into a plurality of partial regions using, among the edges detected by the edge detection means, the edge determined by the edge detection means to have the highest importance, and for hierarchizing the divided partial regions by repeating the process of dividing the partial regions into further partial regions using edges of successively lower importance in order; and color approximating means for approximating, for each partial region hierarchized by the region dividing means, the pixel values indicating the colors of the pixels constituting that partial region with a continuous function.
  2.  The image vectorization device according to claim 1, wherein the edge detection means reduces the raster image to a plurality of scales, detects edges present in the images at the plurality of scales, determines that an edge detected in the images at all scales is an edge of the highest importance, and, for edges detected only in the images at some scales, determines that an edge detected in fewer scale images is an edge of lower importance.
  3.  The image vectorization device according to claim 1, wherein the region dividing means divides the image into rectangular partial regions and then obtains approximate curves of the diagonal lines or curves contained in the partial regions.
  4.  The image vectorization device according to claim 1, wherein the region dividing means assigns to each divided partial region a hierarchy number indicating the hierarchy to which that partial region belongs, and links the partial regions having a correspondence relationship among the partial regions belonging to different hierarchies.
  5.  The image vectorization device according to claim 1, wherein the region dividing means generates an area access table indicating the correspondence between the image coordinates of the raster image and the hierarchized partial regions.
  6.  The image vectorization device according to claim 1, wherein the color approximating means sets the attribute of each side surrounding a partial region divided by the region dividing means to continuous side or discontinuous side, and sets the attribute of each diagonal line or curve contained in the partial region to discontinuous side.
  7.  The image vectorization device according to claim 6, wherein the color approximating means determines whether or not an edge exists on a side surrounding a partial region divided by the region dividing means, sets the attribute of a side on which an edge exists to discontinuous side, and sets the attribute of a side on which no edge exists to continuous side.
  8.  The image vectorization device according to claim 6, wherein the color approximating means calculates the difference between the pixel values inside a partial region divided by the region dividing means and the pixel values of the surrounding region of the partial region, sets the attribute of the side surrounding the partial region to discontinuous side if the difference between the pixel values is equal to or greater than a predetermined value, and sets the attribute of the side surrounding the partial region to continuous side if the difference between the pixel values is less than the predetermined value.
  9.  The image vectorization device according to claim 6, wherein, for a region surrounded by discontinuous sides, the color approximating means samples the pixel values within that region and approximates those pixel values with a continuous function, and, for a region surrounded by continuous sides, or a region surrounded by continuous sides together with diagonal lines or curves set as discontinuous sides, samples the pixel values within that region and some of the pixel values in the adjacent region and approximates those pixel values with a continuous function.
  10.  An image vectorization method comprising: an edge detection processing step in which edge detection means detects edges present in a raster image and determines the importance of each edge in the raster image; a region division processing step in which region dividing means divides the raster image into a plurality of partial regions using, among the edges detected in the edge detection processing step, the edge determined in the edge detection processing step to have the highest importance, and hierarchizes the divided partial regions by repeating the process of dividing the partial regions into further partial regions using edges of successively lower importance in order; and a color approximation processing step in which color approximating means approximates, for each partial region hierarchized in the region division processing step, the pixel values indicating the colors of the pixels constituting that partial region with a continuous function.
  11.  An image vectorization program for causing a computer to execute: an edge detection processing procedure for detecting edges present in a raster image and determining the importance of each edge in the raster image; a region division processing procedure for dividing the raster image into a plurality of partial regions using, among the edges detected in the edge detection processing procedure, the edge determined in the edge detection processing procedure to have the highest importance, and for hierarchizing the divided partial regions by repeating the process of dividing the partial regions into further partial regions using edges of successively lower importance in order; and a color approximation processing procedure for approximating, for each partial region hierarchized in the region division processing procedure, the pixel values indicating the colors of the pixels constituting that partial region with a continuous function.
PCT/JP2011/001106 2011-02-25 2011-02-25 Image vectorization device, image vectorization method, and image vectorization program WO2012114386A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/001106 WO2012114386A1 (en) 2011-02-25 2011-02-25 Image vectorization device, image vectorization method, and image vectorization program

Publications (1)

Publication Number Publication Date
WO2012114386A1 true WO2012114386A1 (en) 2012-08-30

Family

ID=46720209

Country Status (1)

Country Link
WO (1) WO2012114386A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07107294A (en) * 1993-09-30 1995-04-21 Toshiba Corp Device for encoding picture
JPH0927966A (en) * 1995-07-12 1997-01-28 Sanyo Electric Co Ltd Image coding method and image coder
JPH09200750A (en) * 1996-11-08 1997-07-31 Sony Corp Data transmitting method
JP2004023370A (en) * 2002-06-14 2004-01-22 Ikegami Tsushinki Co Ltd Method and device for encoding image
JP2008147880A (en) * 2006-12-07 2008-06-26 Nippon Telegr & Teleph Corp <Ntt> Image compression apparatus and method, and its program
JP2009111649A (en) * 2007-10-29 2009-05-21 Sony Corp Information encoding apparatus and method, information retrieval apparatus and method, information retrieval system and method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TORU MIYAKOSHI ET AL.: "A Segmentation Method for Real Images using Quadratic Curved Line Units", IEICE TECHNICAL REPORT, vol. 103, no. 642, 26 January 2004 (2004-01-26), pages 13 - 18 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112570A (en) * 2021-05-12 2021-07-13 北京邮电大学 Vectorization effect evaluation method based on perception drive
CN113112570B (en) * 2021-05-12 2022-05-20 北京邮电大学 Vectorization effect evaluation method based on perception drive

Similar Documents

Publication Publication Date Title
US11922534B2 (en) Tile based computer graphics
JP7004759B2 (en) Varying the effective resolution depending on the position of the screen by changing the active color sample count within multiple render targets
JP6678209B2 (en) Gradient adjustment for texture mapping to non-orthonormal grid
JP6563048B2 (en) Tilt adjustment of texture mapping for multiple rendering targets with different resolutions depending on screen position
TWI578266B (en) Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport
TWI584223B (en) Method and system of graphics processing enhancement by tracking object and/or primitive identifiers,graphics processing unit and non-transitory computer readable medium
US7884825B2 (en) Drawing method, image generating device, and electronic information apparatus
TW201539374A (en) Method for efficient construction of high resolution display buffers
JP2005100177A (en) Image processor and its method
EP4094231A1 (en) Mesh optimization for computer graphics
US9721187B2 (en) System, method, and computer program product for a stereoscopic image lasso
US20150015574A1 (en) System, method, and computer program product for optimizing a three-dimensional texture workflow
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
US10347034B2 (en) Out-of-core point rendering with dynamic shapes
WO2012114386A1 (en) Image vectorization device, image vectorization method, and image vectorization program
US10062191B2 (en) System and method for rendering points without gaps
US11869123B2 (en) Anti-aliasing two-dimensional vector graphics using a compressed vertex buffer

Legal Events

Date Code Title Description
121	Ep: the epo has been informed by wipo that ep was designated in this application
	Ref document number: 11859616
	Country of ref document: EP
	Kind code of ref document: A1

NENP	Non-entry into the national phase
	Ref country code: DE

122	Ep: pct application non-entry in european phase
	Ref document number: 11859616
	Country of ref document: EP
	Kind code of ref document: A1

NENP	Non-entry into the national phase
	Ref country code: JP