JPH08263647A - Area division device - Google Patents

Area division device

Info

Publication number
JPH08263647A
Authority
JP
Japan
Prior art keywords
area
image
division
strength
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP7063653A
Other languages
Japanese (ja)
Other versions
JP3055423B2 (en)
Inventor
Atsushi Kasao
敦司 笠尾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Business Innovation Corp
Original Assignee
Fuji Xerox Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuji Xerox Co Ltd filed Critical Fuji Xerox Co Ltd
Priority to JP7063653A priority Critical patent/JP3055423B2/en
Publication of JPH08263647A publication Critical patent/JPH08263647A/en
Application granted granted Critical
Publication of JP3055423B2 publication Critical patent/JP3055423B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Image Communication Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

PURPOSE: To provide an area division device that can divide an image into areas appropriately in accordance with the strength of the texture of the picture. CONSTITUTION: In the area division device 1, a noise removal strength decision means 51 changes the noise removal condition used during area division in accordance with the value of the strength of the texture element of the quantized full-color image. A deformation strength decision means 52 changes the magnitude of the deformation of an area during area division in accordance with the value of the strength of the texture element. A contribution rate change means 53 changes the contribution rates of the prescribed color space and the xy coordinates, which appear in the conditional expression used for the division, in accordance with the value of the strength of the texture element.

Description

【発明の詳細な説明】Detailed Description of the Invention

【0001】[0001]

【産業上の利用分野】本発明は、主として画像の前処理
として、量子化されたフルカラー画像データの絵柄に応
じた領域分割を行う領域分割装置に関する。
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an area dividing device for performing area division according to a pattern of quantized full-color image data, mainly as image preprocessing.

【0002】[0002]

【従来の技術】読み込まれた画像を適切な領域に分割す
る処理は画像処理の様々な分野で主として前処理として
利用されている大切な技術である。例えば、画像編集で
は編集を行うべき領域を切り出す必要があり、その際に
画像の領域分割が利用される。また、認識処理において
も認識したい物を見つける前処理として領域分割が利用
されている。この領域分割の性能の善し悪しがそのまま
処理全体の性能を左右することになるため、性能の高い
領域分割を行うことが必要とされている。
2. Description of the Related Art The process of dividing a read image into appropriate areas is an important technique mainly used as preprocessing in various fields of image processing. For example, in image editing, it is necessary to cut out an area to be edited, and at that time, area division of the image is used. Also, in the recognition processing, area division is used as a pre-processing for finding an object to be recognized. Since the performance of this area division directly affects the performance of the entire processing, it is necessary to perform area division with high performance.

【0003】また、近年では種々の画像処理で利用され
る共通性の高い画像特徴を抽出し、整理する際に、画像
の領域ごとにまとめて次の画像処理で利用しやすくする
処理が試みられているが、その際にも適切な画像の領域
分割が必要とされている。
Further, in recent years, when extracting and organizing highly common image features used in various image processes, a process is attempted in which the regions of the image are grouped together so that they can be easily used in the next image process. However, even in that case, it is necessary to appropriately divide the image area.

【0004】領域分割の手法としては、古くから知られ
ているK平均アルゴリズムがあり、また、画像のRGB
といった色成分に画像のxy座標を加えた5次元でK平
均アルゴリズムを実施する方法(色情報と位置情報とを
併用したセグメンテーション手法の一検討、1991年
電子情報通信学会春季全国大会予稿集参照)などが提案
されている。また、別のアプローチとしてMAP(最大
事後確率)推定を利用したものなども提案されている。
As a method of area division, the K-means algorithm has long been known. A method of applying the K-means algorithm in five dimensions, adding the xy coordinates of the image to color components such as RGB, has also been proposed (see "A study of a segmentation method using both color information and position information," Proceedings of the 1991 IEICE Spring National Convention). As another approach, methods using MAP (maximum a posteriori probability) estimation have also been proposed.

【0005】[0005]

【発明が解決しようとする課題】しかし、これらの領域
分割技術によってカラー画像を分割するにあたり、適切
な領域数を決定するのは難しい問題である。また、RG
BやL* * * などの色空間にxy座標を加えた5次
元でK平均アルゴリズムによる領域分割を行う場合、一
つ一つの領域は画像上でも近い距離にまとまるが、必然
的に過分割となりやすく、あまり分割の必然性のない部
分まで分割されてしまうことになる。
However, when a color image is divided by these area division techniques, determining an appropriate number of areas is a difficult problem. In addition, when area division by the K-means algorithm is performed in five dimensions, adding the xy coordinates to a color space such as RGB or L*a*b*, each area is gathered within a short distance on the image, but the result inevitably tends to be over-divided, so that even parts for which division is hardly necessary end up being divided.

【0006】また、このような過分割されるという性格
を利用して画像に美術的な価値を付加することを考える
と、適切な領域分割の基準を決めることが困難であり、
画像全体に渡って同じ条件を用いた場合に部分的に不適
切な分割が成されてしまうことが多い。よって、本発明
は、画像のテクスチャーの強さに応じて適切な領域分割
を行うことができる領域分割装置を提供することを目的
とする。
Further, when one considers adding artistic value to an image by exploiting this tendency toward over-division, it is difficult to determine an appropriate criterion for area division, and when the same condition is used over the entire image, partially inappropriate divisions are often produced. It is therefore an object of the present invention to provide an area dividing device capable of performing appropriate area division in accordance with the strength of the texture of the image.

【0007】[0007]

【課題を解決するための手段】本発明は、上記の目的を
達成するために成された領域分割装置である。すなわ
ち、本発明は、量子化されたフルカラーの画像データを
その絵柄に応じて領域分割するにあたり、所定の色空
間、画素のxy座標さらには画素のテクスチャー要素を
用いる領域分割装置であって、画素のテクスチャー要素
の強さの値に応じて領域分割の際のノイズ除去の条件を
変化させたり、画素のテクスチャー要素の強さの値に応
じて前記領域分割の際の領域の変形の大きさを変化させ
たり、画素のテクスチャー要素の強さの値に応じて前記
領域分割の際の分割の条件式に現れる所定の色空間およ
びxy座標の寄与率を変化させるものである。
SUMMARY OF THE INVENTION: The present invention is an area dividing device made to achieve the above object. That is, the present invention is an area dividing device that uses a prescribed color space, the xy coordinates of the pixels, and a texture element of the pixels when dividing quantized full-color image data into areas according to its picture pattern, and that changes the noise removal condition used during area division according to the value of the strength of the texture element of the pixels, changes the magnitude of the deformation of the areas during area division according to the value of the strength of the texture element of the pixels, or changes the contribution rates of the prescribed color space and the xy coordinates appearing in the conditional expression of the division according to the value of the strength of the texture element of the pixels.

【0008】[0008]

【作用】本発明では、画素のテクスチャー要素の強さの
値に応じて領域分割の際のノイズ除去の条件を変化させ
ているため、テクスチャーが複雑であるほどノイズ除去
の量を多くすることができる。また、画素のテクスチャ
ー要素の画素のテクスチャー要素の強さの値に応じて前
記領域分割の際の領域の変形の大きさを変化させている
ため、テクスチャーが複雑であるほど領域の変形量を大
きくできる。さらに、画素のテクスチャー要素の強さの
値に応じて前記領域分割の際の分割の条件式に現れる所
定の色空間およびxy座標の寄与率を変化させているた
め、テクスチャーが複雑であるほどxy座標による位置
の寄与率を上げ、テクスチャーが簡素であるほど色空間
の寄与率を上げることができるようになる。
In the present invention, the noise removal condition during area division is changed according to the value of the strength of the texture element of the pixels, so the more complicated the texture, the larger the amount of noise removal can be made. Also, since the magnitude of the deformation of the areas during area division is changed according to the value of the strength of the texture element of the pixels, the more complicated the texture, the larger the amount of area deformation can be made. Furthermore, since the contribution rates of the prescribed color space and the xy coordinates appearing in the conditional expression of the division are changed according to the value of the strength of the texture element of the pixels, the contribution of the position given by the xy coordinates can be raised as the texture becomes more complicated, and the contribution of the color space can be raised as the texture becomes simpler.

【0009】[0009]

【実施例】以下に、本発明の領域分割装置における実施
例を図に基づいて説明する。図1は本発明における領域
分割装置を説明する図で、(a)は全体構成図、(b)
は領域分割手段の内部構成図である。図1(a)に示す
ように、この領域分割装置1は、画像入力手段2と、方
向成分付加手段3と、画像バッファ4と、領域分割手段
5と、分割領域バッファ6と、分割結果出力手段7とを
備えており、特に領域分割手段5内での画像分割処理に
特徴がある。
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT: An embodiment of the area dividing device of the present invention will be described below with reference to the drawings. FIG. 1 illustrates the area dividing device of the present invention; FIG. 1(a) is an overall configuration diagram and FIG. 1(b) is an internal configuration diagram of the area dividing means. As shown in FIG. 1(a), the area dividing device 1 comprises an image input means 2, a direction component addition means 3, an image buffer 4, an area dividing means 5, a divided area buffer 6, and a division result output means 7, and is characterized in particular by the image division processing in the area dividing means 5.

【0010】画像入力手段2は、原稿上の画像を読み取
りイメージスキャナや、コンピュータ処理により画像を
生成するもの、または領域分割装置1外で生成された画
像を内部に取り込むものである。なお、本実施例では画
像入力手段2からCIE(国際照明委員会)が推奨する
* * * 色空間で表現されたフルカラー画像が得ら
れる場合を例として説明する。
The image input means 2 is, for example, an image scanner that reads an image on a document, a device that generates an image by computer processing, or a means for taking in an image generated outside the area dividing device 1. In this embodiment, the case where a full-color image expressed in the L*a*b* color space recommended by the CIE (International Commission on Illumination) is obtained from the image input means 2 will be described as an example.

【0011】方向成分付加手段3は、画像のテクスチャ
ー要素を示すための方向成分を算出して取り込んだ画像
データに付加する部分である。また、画像バッファ4
は、画像入力手段2にて取り込んだ画像データ(例えば
* * * )とともに方向成分付加手段3にて算出し
た方向成分dを格納するものである。領域分割手段5
は、取り込んだ画像のL* * * 色空間、画素のxy
座標さらに画素のテクスチャー要素である方向成分dを
用いて領域分割処理を行う部分である。分割領域バッフ
ァ6は領域分割手段5が画像の領域分割を行う際の作業
用のバッファとして使用される。また、分割結果出力手
段7は、分割の結果を出力するディスプレイやプリンタ
などから構成されている。
The direction component addition means 3 is a part that calculates a direction component for indicating the texture element of the image and adds it to the captured image data. The image buffer 4 stores the image data captured by the image input means 2 (for example, L*a*b*) together with the direction component d calculated by the direction component addition means 3. The area dividing means 5 is a part that performs area division processing using the L*a*b* color space of the captured image, the xy coordinates of the pixels, and the direction component d, which is the texture element of the pixels. The divided area buffer 6 is used as a working buffer when the area dividing means 5 divides the image into areas. The division result output means 7 comprises a display, a printer, or the like for outputting the division result.

【0012】次に、図1(b)に基づき本発明の特徴で
ある領域分割手段5での処理の概略を説明する。すなわ
ち、領域分割手段5は、画像バッファ4(図1(a)参
照)からフルカラー画像のL* * * にテクスチャー
成分が付加されたテクスチャー成分付加画像を得る。こ
こでテクスチャー成分としては図1(a)に示す方向成
分付加手段3で算出された方向成分が用いられている。
Next, an outline of the processing in the area dividing means 5, which is a feature of the present invention, will be described with reference to FIG. 1(b). The area dividing means 5 obtains, from the image buffer 4 (see FIG. 1(a)), a texture-component-added image in which the texture component has been added to the L*a*b* values of the full-color image. Here, the direction component calculated by the direction component addition means 3 shown in FIG. 1(a) is used as the texture component.

【0013】領域分割手段5の内部は、ノイズ除去強さ
決定手段51、変形強さ決定手段52、寄与率変化手段
53を備えるとともに、領域決定手段54および分割終
了判定手段55を備えている。領域分割手段5に入力さ
れたテクスチャー成分付加画像は、このノイズ除去強さ
決定手段51、変形強さ決定手段52、寄与率変化手段
53の全てまたは一つもしくは二つに入力され、所定の
条件に応じて領域決定手段54により注目画素の属する
領域が決定される。そして、全ての画素が適切な分割領
域に属していると分割終了判定手段55で判断した場合
にはその分割結果を出力し、適切でない場合には新たな
条件のもので領域分割処理を繰り返すことになる。
The area dividing means 5 internally comprises a noise removal strength determining means 51, a deformation strength determining means 52, and a contribution rate changing means 53, as well as an area determining means 54 and a division end determining means 55. The texture-component-added image input to the area dividing means 5 is input to all, one, or two of the noise removal strength determining means 51, the deformation strength determining means 52, and the contribution rate changing means 53, and the area determining means 54 determines the area to which the pixel of interest belongs in accordance with prescribed conditions. When the division end determining means 55 judges that all pixels belong to appropriate divided areas, the division result is output; when they do not, the area division processing is repeated under new conditions.

【0014】次に、本発明の特徴である画像の領域分割
処理を図に基づいて詳細に説明する。先ず、図1(a)
に示す画像入力手段2によって読み込んだ画像は画像バ
ッファ4に取り込まれる。また、画像データはその方向
要素を抽出するため方向成分付加手段3にも入力され
る。ここで抽出された方向成分dは画像バッファ4の空
いているスペースに書き込まれる。方向要素の抽出は、
例えば電子技術総合研究所報告第835号第80頁に記
載された方法を応用する。
Next, the image area division processing, which is a feature of the present invention, will be described in detail with reference to the drawings. First, the image read by the image input means 2 shown in FIG. 1(a) is taken into the image buffer 4. The image data is also input to the direction component addition means 3 in order to extract its direction element. The direction component d extracted here is written into an empty space of the image buffer 4. For the extraction of the direction element, for example, the method described on page 80 of Report No. 835 of the Electrotechnical Laboratory is applied.

【0015】すなわち、図2に示す入力画像Gのうち例
えばL* のみを入力画像として用い、図3に示す方向要
素抽出マスクによってその方向要素を抽出する。なお、
ここではL* のみを用いているがL* * * 全てを用
いて方向要素を抽出してもよい。方向要素を抽出するに
は、注目画素を中心とする3×3の画素のL* 値と対応
するマスクの値とを掛け合わせその和をとる。図3
(a)は水平方向用のマスクであり、図3(b)は垂直
方向用のマスクである。例えば、3×3の画素から成る
画像GとマスクMとを合わせ、マスクMが重なった画素
をそれぞれG(1,1)、G(1,2)、…、G(3,
3)とし、画像GとマスクMとの対応する各要素同士を
掛け合わせてそれを加算する。この操作を(G*M)と
する。
That is, for example, only L* of the input image G shown in FIG. 2 is used as the input, and its direction element is extracted by the direction element extraction masks shown in FIG. 3. Although only L* is used here, the direction element may also be extracted using all of L*, a*, and b*. To extract the direction element, the L* values of the 3×3 pixels centered on the pixel of interest are multiplied by the corresponding mask values and the products are summed. FIG. 3(a) is the mask for the horizontal direction and FIG. 3(b) is the mask for the vertical direction. For example, the 3×3 image G and the mask M are overlaid, the pixels on which the mask M lies are denoted G(1,1), G(1,2), ..., G(3,3), and the corresponding elements of the image G and the mask M are multiplied together and the results are added. This operation is written (G*M).

【0016】そして、CC=(G*M1 )、LL=(G
*M2 )として数1に示す計算を行う。
Then, with CC = (G*M1) and LL = (G*M2), the calculation shown in Equation 1 is performed.

【数1】 d1 = 16(tan⁻¹(LL/CC) + π/2)/π,  d2 = Log(LL×LL + CC×CC + 1)

【0017】このd1は画像の方向成分の向きを0〜1
5までの16段階で表し、d2は画像の方向成分の大き
さを0〜15の16段階で表すものである。つまり、d
1は画像の濃度を等高線表示した場合の等高線の方向を
表すことになるので、180度反対向きの方向要素は同
一のものとなる、すなわち、図4に示すように0〜πま
での角度を16段階で記述できるようにすることで十分
である。このため、d1、d2両方合わせて8ビットが
必要となる。この8ビットのデータをL* ** の方
向成分として入力画像に付け加え、テクスチャー成分付
加画像としてこれに基づく領域分割を行う。
This d1 expresses the orientation of the direction component of the image in 16 levels from 0 to 15, and d2 expresses the magnitude of the direction component of the image in 16 levels from 0 to 15. Since d1 represents the direction of the contour lines obtained when the density of the image is displayed as contour lines, direction elements pointing in opposite directions (180 degrees apart) are treated as identical; that is, as shown in FIG. 4, it is sufficient to be able to describe angles from 0 to π in 16 levels. Therefore, 8 bits are required for d1 and d2 together. This 8-bit data is added to the input image as the direction component in addition to L*a*b*, and area division is performed on the resulting texture-component-added image.
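
A minimal Python sketch of this direction-component extraction, assuming Sobel-like 3×3 masks as stand-ins for the masks of FIG. 3 (whose exact coefficients appear only in the figure) and assuming d2 is clipped to the 0-15 range:

```python
import numpy as np

# Assumed stand-ins for the FIG. 3 masks: M1 for the horizontal direction,
# M2 for the vertical direction (Sobel-like coefficients).
M1 = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
M2 = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def direction_components(L_star):
    """Per-pixel d1 (orientation, 16 levels over 0..pi) and d2 (magnitude,
    assumed clipped to 0..15) computed from the L* channel via Equation 1."""
    L_star = np.asarray(L_star, dtype=float)
    h, w = L_star.shape
    d1 = np.zeros((h, w))
    d2 = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            G = L_star[y - 1:y + 2, x - 1:x + 2]
            CC = float(np.sum(G * M1))            # CC = (G*M1)
            LL = float(np.sum(G * M2))            # LL = (G*M2)
            if CC != 0.0:
                theta = np.arctan(LL / CC)
            else:
                theta = np.pi / 2 if LL >= 0 else -np.pi / 2
            d1[y, x] = 16.0 * (theta + np.pi / 2) / np.pi       # Equation 1
            d2[y, x] = min(15.0, np.log(LL * LL + CC * CC + 1.0))
    return d1, d2
```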

【0018】次に、図1に示す領域分割手段5により画
像の領域分割を行う。領域分割方法としては、画像バッ
ファ4に格納されたL* * * と先に算出した方向成
分d(d1とd2)さらに画素の位置情報であるxy座
標を用いた6次元でK平均アルゴリズムを適用する。
Next, the area dividing means 5 shown in FIG. 1 divides the image into areas. As the area division method, the K-means algorithm is applied in six dimensions, using the L*a*b* values stored in the image buffer 4, the previously calculated direction component d (d1 and d2), and the xy coordinates, which are the pixel position information.

【0019】K平均アルゴリズムはカラー画像を領域分
割するために応用できるクラスタリング手法の一つであ
る。つまり、入力画像に対してK個の初期領域中心を設
定し、初期領域分割を含めて分割された領域が一定の状
態に収束するまで数次にわたって領域分割を繰り返し、
初期領域分割後の各次の領域分割後において各分割領域
の領域中心を計算することにより、分割された領域が一
定の状態に収束したか否かを判定するものである。
The K-means algorithm is one of the clustering methods that can be applied to divide a color image into areas. That is, K initial region centers are set for the input image, area division is repeated over several iterations, including the initial division, until the divided areas converge to a fixed state, and whether the divided areas have converged is judged by computing the region center of each divided area after each iteration following the initial division.
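
The overall iterative structure described here can be sketched as the following Python skeleton; the distance, center-update and shift functions it takes as arguments correspond to Equations 9 and 10-14 introduced below, and the convergence threshold eps is an assumed parameter:

```python
def k_means_segmentation(pixels, initial_centers, distance, update_center,
                         shift, eps=1e-3, max_iter=50):
    """K-means-style loop of [0019]: assign every pixel to its nearest region
    center, recompute the centers, and repeat until the centers stop moving."""
    centers = list(initial_centers)
    labels = [0] * len(pixels)
    for _ in range(max_iter):
        # Assignment step: nearest region center under the supplied distance.
        for n, p in enumerate(pixels):
            labels[n] = min(range(len(centers)),
                            key=lambda j: distance(centers[j], p))
        # Update step: recompute each region center from its member pixels.
        new_centers = [
            update_center([p for p, lab in zip(pixels, labels) if lab == j],
                          centers[j])
            for j in range(len(centers))
        ]
        # Convergence test: stop when every center has shifted only negligibly.
        if max(shift(new, old) for new, old in zip(new_centers, centers)) < eps:
            centers = new_centers
            break
        centers = new_centers
    return labels, centers
```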

【0020】例えば、図5(a)に示すように、K=1
00として縦横10×10の枡目を作り、各々の枡目の
重心に位置する画素を領域中心Ci,j (iは繰り返し計
算した回数、初期値はi=0で表す。すなわち初期領域
中心はC0,j である。また、jは分割領域番号でj=
1,2,…,100)とした例をもとに説明する。さら
に、初期領域中心の位置を(x0,j , y0,j )と表し、
また、その画素のL* * * 値を分割領域の代表色
(L* 0,j , a* 0,j , b* 0,j )とする。さらに、方
向成分に関しては、d2<1であれば方向を持たないも
のとして、無方向の記号である「16」をd1に代入し
ておく。
For example, as shown in FIG. 5(a), the description is based on an example in which K = 100: a 10×10 grid of cells is created, and the pixel located at the center of gravity of each cell is taken as the region center Ci,j (i is the number of iterations, with the initial value expressed as i = 0, so the initial region centers are C0,j; j is the divided region number, j = 1, 2, ..., 100). Further, the position of the initial region center is written (x0,j, y0,j), and the L*a*b* values of that pixel are taken as the representative color of the divided region (L*0,j, a*0,j, b*0,j). As for the direction component, if d2 < 1 the pixel is regarded as having no direction, and the no-direction symbol "16" is substituted into d1.

【0021】そして、図6(a)に示すように、分割領
域テーブルにおける特徴バッファにはd2の値は記録せ
ずd1の値のみを記録しておく。以下、これらをまとめ
て領域中心(xi,j ,yi,j ,L* i,j ,a* i,j ,b
* i,j ,d1i,j )と表すことにする。
Then, as shown in FIG. 6(a), the value of d2 is not recorded in the feature buffer of the divided region table; only the value of d1 is recorded. Hereinafter, these values are collectively written as the region center (xi,j, yi,j, L*i,j, a*i,j, b*i,j, d1i,j).
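
A sketch of this initialization, assuming the pixel at the centre of each rectangular cell (its centre of gravity) is taken as that cell's initial region center:

```python
def initial_region_centers(lab_image, d1, d2, grid=10):
    """Place K = grid*grid initial region centers on a grid x grid lattice
    ([0020]-[0021]).  lab_image is an H x W x 3 array of (L*, a*, b*);
    d1 and d2 are the direction components.  Each center is the tuple
    (x, y, L*, a*, b*, d1), with d1 = 16 (the no-direction symbol) if d2 < 1."""
    h, w = lab_image.shape[:2]
    centers = []
    for gy in range(grid):
        for gx in range(grid):
            x = int((gx + 0.5) * w / grid)     # centre of the cell (assumed rounding)
            y = int((gy + 0.5) * h / grid)
            L, a, b = lab_image[y, x]
            dir1 = 16 if d2[y, x] < 1 else d1[y, x]
            centers.append((x, y, float(L), float(a), float(b), dir1))
    return centers
```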

【0022】次に領域分割手段5(図1参照)は画像を
構成している画素を画像バッファ4(図1(a)参照)
から一つずつ順に取り出し、その画素の値(xn ,yn
,L * n ,a* n ,b* n ,d1n , d2n )(nは
画素番号を示す)と全ての領域中心(xi,j ,yi,j ,
* i,j ,a* i,j ,b* i,j ,d1i,j )との距離L
i,j,n を数2によって求める(初期領域中心はi=
0)。ここでは先ず、各成分の差を求めることになる。
Next, the area dividing means 5 (see FIG. 1) takes out, one by one, the pixels constituting the image from the image buffer 4 (see FIG. 1(a)) and obtains, by Equation 2, the distance Li,j,n between the value of that pixel (xn, yn, L*n, a*n, b*n, d1n, d2n) (n denotes the pixel number) and every region center (xi,j, yi,j, L*i,j, a*i,j, b*i,j, d1i,j) (for the initial region centers, i = 0). Here, first, the difference of each component is obtained.

【数2】 dxi,j,n = (xi,j − xn),  dyi,j,n = (yi,j − yn),  dli,j,n = (L*i,j − L*n),  dai,j,n = (a*i,j − a*n),  dbi,j,n = (b*i,j − b*n)

【0023】また、方向成分の距離については極座標の
角度成分になるので、d1i,j =16かつd1n <1の
場合はどちらも方向成分がないため差は小さいと考え、
数3のようになる。
Further, since the distance of the direction component is an angular quantity in polar coordinates, when d1i,j = 16 and d1n < 1, neither has a direction component, so the difference is regarded as small, and Equation 3 is used.

【数3】 ddi,j,n = 0

【0024】一方、d1i,j <16かつd1n ≧1の場
合は数4のようになる。
On the other hand, when d1i,j < 16 and d1n ≥ 1, the following Equation 4 is obtained.

【数4】 ddi,j,n = |d1n − d1i,j|; however, if ddi,j,n > 8, then ddi,j,n = 16 − |d1n − d1i,j|

【0025】なお、(d1i,j =16かつd1n ≧1)
または(d1i,j ≠16かつd1n<1)の場合は、互
いに角度がπ/2異なる方向成分同士の距離よりは近
く、同じ方向を向いている方向成分同士の距離よりは離
れているものと考え、0と7の間をとって数5のように
する。
In the case of (d1i,j = 16 and d1n ≥ 1) or (d1i,j ≠ 16 and d1n < 1), the pair is regarded as closer than two direction components whose angles differ by π/2 but farther apart than two direction components pointing in the same direction, so a value between 0 and 7 is taken as in Equation 5.

【数5】 ddi,j,n = 3.5
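
Equations 3-5 can be sketched as follows; following the text, the pixel is treated as directionless when d1n < 1 (compare the d2 < 1 rule of paragraph [0020]), and the region center is directionless when its d1 equals the no-direction symbol 16:

```python
def direction_distance(d1_center, d1_pixel):
    """Direction-component distance dd_{i,j,n} of Equations 3-5."""
    center_no_dir = (d1_center == 16)
    pixel_no_dir = (d1_pixel < 1)
    if center_no_dir and pixel_no_dir:
        return 0.0                              # Equation 3: neither has a direction
    if (not center_no_dir) and (not pixel_no_dir):
        dd = abs(d1_pixel - d1_center)          # Equation 4, with wrap-around
        if dd > 8:                              # because orientations 0 and 16 coincide
            dd = 16 - abs(d1_pixel - d1_center)
        return dd
    return 3.5                                  # Equation 5: only one has a direction
```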

【0026】このような式により領域分割手段5(図1
参照)が各画素の値と全ての領域中心との距離を求め、
その画素の属する領域を更新していく。この際、二つの
領域が複雑に入り組み合うところどころに発生する孤立
点がノイズのように目障りとなるため、図1(b)に示
すノイズ除去強さ決定手段51によりノイズの削除を行
う。
Using these expressions, the area dividing means 5 (see FIG. 1) obtains the distance between the value of each pixel and every region center and updates the region to which the pixel belongs. At this time, isolated points that arise here and there where two regions are intricately interleaved are as objectionable as noise, so noise removal is performed by the noise removal strength determining means 51 shown in FIG. 1(b).

【0027】ノイズの削除を行うには、先ず図7に示す
注目画素nと隣接している画素のうち、注目画素nと異
なる領域に属している画素数Nneighborを数え、Nneig
hbor 2 を求める。例えば図7のハッチングで示される画
素は注目画素nの属する領域と異なる領域に属している
ものであり、この例では4画素となる。したがってNne
ighbor=4でNneighbor2 =16となる。
To perform noise removal, first, among the pixels adjacent to the pixel of interest n shown in FIG. 7, the number of pixels Nneighbor belonging to a region different from that of the pixel of interest n is counted, and Nneighbor² is obtained. For example, the pixels shown by hatching in FIG. 7 belong to a region different from the region to which the pixel of interest n belongs; in this example there are four such pixels, so Nneighbor = 4 and Nneighbor² = 16.

【0028】そして、このNneighbor2 の項を先に算出
した距離Li,j,n に加える。これによって、注目画素n
が分割領域jに属すると孤立点に近くなる場合は、注目
画素nに接している画素のうち注目画素nと異なる分割
領域に属している画素が多くなるので、Nneighbor2
値が大きくなり距離Li,j,n も大きくなる。したがっ
て、分割領域jが孤立点となる状態が発生しにくくな
り、ノイズ削除を実現することができるようになる。
This Nneighbor² term is then added to the previously calculated distance Li,j,n. As a result, if assigning the pixel of interest n to divided region j would make it close to an isolated point, many of the pixels adjacent to the pixel of interest n belong to regions other than j, so the value of Nneighbor² becomes large and the distance Li,j,n also becomes large. Consequently, the state in which divided region j becomes an isolated point is less likely to occur, and noise removal is realized.
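
A sketch of the Nneighbor² noise-suppression term; the 8-neighbourhood is assumed from FIG. 7:

```python
def neighbor_penalty(labels, x, y, candidate_region):
    """Count the neighbours of pixel (x, y) that belong to a region other than
    candidate_region and return Nneighbor squared ([0027]-[0028]).  labels is
    an H x W array of current region assignments."""
    h, w = len(labels), len(labels[0])
    n_neighbor = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != candidate_region:
                n_neighbor += 1
    return n_neighbor ** 2
```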

【0029】また、図1(b)に示す領域分割手段5の
変形強さ決定手段52において、領域分割を行う際の領
域の変形を制御する関数Fx()、Fy()を以下のよ
うに定義しておく。先ず、|d1n −16π/(tan
-1(−dyi,j,n /dxi,j,n )+π/2)|<8の場
合は数6のように定義する。
Further, in the deformation strength determining means 52 of the area dividing means 5 shown in FIG. 1(b), the functions Fx() and Fy() that control the deformation of the regions during area division are defined as follows. First, when |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)| < 8, they are defined as in Equation 6.

【数6】 Fx(d1n, dyi,j,n, dxi,j,n) = |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)|/3.5 − 1,  Fy(d1n, dyi,j,n, dxi,j,n) = |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)|/3.5 − 1

【0030】また、|d1n −16π/(tan-1(−
dyi,j,n /dxi,j,n )+π/2)|≧8の場合は数
7のように定義する。
When |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)| ≥ 8, they are defined as in Equation 7.

【数7】 Fx(d1n, dyi,j,n, dxi,j,n) = (16 − |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)|)/3.5 − 1,  Fy(d1n, dyi,j,n, dxi,j,n) = (16 − |d1n − 16π/(tan⁻¹(−dyi,j,n/dxi,j,n) + π/2)|)/3.5 − 1

【0031】数6、数7に示すように領域分割を行う際
の領域の変形を制御する関数Fx()、Fy()を変形
強さ決定手段52(図1(b)参照)で定義しておくこ
とにより、方向成分すなわちテクスチャー要素の強さの
値に応じて領域変形の割合を変化させることができるよ
うになる。
As shown in equations (6) and (7), the functions Fx () and Fy () for controlling the deformation of the area when the area division is performed are defined by the deformation strength determining means 52 (see FIG. 1 (b)). This allows the ratio of area deformation to be changed according to the direction component, that is, the value of the strength of the texture element.
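
A sketch of Fx()/Fy(); the displacement-direction term is read here as the d1-style quantization 16(tan⁻¹(−dy/dx) + π/2)/π of Equation 1, and the grouping (16 − |·|)/3.5 − 1 in Equation 7 is assumed for continuity with Equation 6, so both readings are interpretive assumptions:

```python
import math

def deformation_factors(d1_pixel, dy, dx):
    """Fx and Fy of Equations 6-7 for a pixel with direction d1_pixel lying at
    offset (dx, dy) from the candidate region center.  Both functions share the
    same printed expression, so a single value is returned for each."""
    if dx != 0:
        theta = math.atan(-dy / dx)
    else:
        theta = math.pi / 2 if -dy >= 0 else -math.pi / 2
    q = 16.0 * (theta + math.pi / 2) / math.pi   # quantized displacement direction (assumed reading)
    diff = abs(d1_pixel - q)
    if diff < 8:
        f = diff / 3.5 - 1.0                     # Equation 6
    else:
        f = (16.0 - diff) / 3.5 - 1.0            # Equation 7 (wrap-around case, assumed grouping)
    return f, f                                  # (Fx, Fy)
```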

【0032】さらに、図1(b)に示す寄与率変化手段
53によってH(d2i,j )を数8のように定義してお
く。
Further, H(d2i,j) is defined as in Equation 8 by the contribution rate changing means 53 shown in FIG. 1(b).

【数8】 H(d2i,j) = 1/(1.1 + exp(11・(0.8 − N))), where N = N2i,j/Ni,j, Ni,j is the number of pixels contained in divided region j, and N2i,j is the number of pixels contained in divided region j for which d2i,j > 1.
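
A sketch of H(d2i,j); the handling of an empty region is an added assumption:

```python
import math

def texture_strength_weight(d2_values_in_region):
    """H(d2_{i,j}) of Equation 8: a sigmoid of the fraction N of pixels in the
    region whose direction magnitude d2 exceeds 1."""
    n_total = len(d2_values_in_region)
    if n_total == 0:
        return 0.0                     # empty region treated as textureless (assumption)
    n_strong = sum(1 for d2 in d2_values_in_region if d2 > 1)
    N = n_strong / n_total
    return 1.0 / (1.1 + math.exp(11.0 * (0.8 - N)))
```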

【0033】領域分割手段5(図1参照)は、以上の項
を合わせて数9に示すような計算を行う。
The area dividing means 5 (see FIG. 1) combines the above terms and performs the calculation shown in Equation 9.

【数9】 Li,j,n = kx(1 + kh1・H(d2i,j))・(1 + kh2・H(d2i,j)・Fx(d1n, dyi,j,n, dxi,j,n))・dxi,j,n² + ky(1 + kh1・H(d2i,j))・(1 + kh2・H(d2i,j)・Fy(d1n, dyi,j,n, dxi,j,n))・dyi,j,n² + kl・dli,j,n² + ka・dai,j,n² + kb・dbi,j,n² + kh3・H(d2i,j)・Nneighbor² + kd・ddi,j,n²

【0034】なお、数9において、iは繰り返し計算し
た回数、nは画素番号、jは分割領域番号、kx、k
y、kh1、kh2、kh3、kl、ka、kb、kd
は任意の係数であり、ここではkh2=0.9とし他の
係数は全て「1」とする。
In Equation 9, i is the number of iterations, n is the pixel number, j is the divided region number, and kx, ky, kh1, kh2, kh3, kl, ka, kb, and kd are arbitrary coefficients; here kh2 = 0.9 and all other coefficients are set to 1.
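
Putting the pieces together, Equation 9 can be sketched as follows; H_j, Fx, Fy, n_neighbor_sq and dd are assumed to have been computed by the helpers sketched above, and the coefficient defaults follow this paragraph:

```python
def region_distance(center, pixel, H_j, Fx, Fy, n_neighbor_sq, dd,
                    kx=1.0, ky=1.0, kl=1.0, ka=1.0, kb=1.0, kd=1.0,
                    kh1=1.0, kh2=0.9, kh3=1.0):
    """Distance L_{i,j,n} of Equation 9 between a candidate region center and a
    pixel, both given as (x, y, L*, a*, b*) tuples."""
    dx = center[0] - pixel[0]
    dy = center[1] - pixel[1]
    dl = center[2] - pixel[2]
    da = center[3] - pixel[3]
    db = center[4] - pixel[4]
    return (kx * (1 + kh1 * H_j) * (1 + kh2 * H_j * Fx) * dx ** 2
            + ky * (1 + kh1 * H_j) * (1 + kh2 * H_j * Fy) * dy ** 2
            + kl * dl ** 2 + ka * da ** 2 + kb * db ** 2
            + kh3 * H_j * n_neighbor_sq
            + kd * dd ** 2)
```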

【0035】つまり、Nは領域内のテクスチャー要素の
強さの値が小さければ「0」に近くなり、テクスチャー
要素の強さの値が大きくなればなるほど「1」に近い値
となる。このため、テクスチャー要素の強さの値が小さ
ければH(d2i,j )≒0となるので距離Li,j,n への
xy座標の寄与も領域変形の割合もノイズ除去の強さも
最小となる。反対にテクスチャー要素の強さの値が大き
くなれば距離Li,j,nへのxy座標の寄与も領域変形の
割合もノイズ除去の強さも大きくなる。
That is, N is close to 0 when the strength of the texture elements within a region is small, and approaches 1 as the strength of the texture elements becomes larger. Consequently, if the strength of the texture elements is small, H(d2i,j) ≈ 0, so the contribution of the xy coordinates to the distance Li,j,n, the degree of region deformation, and the strength of noise removal are all minimized. Conversely, as the strength of the texture elements becomes larger, the contribution of the xy coordinates to Li,j,n, the degree of region deformation, and the strength of noise removal all increase.

【0036】このような処理を全ての画素に対して実行
すると、分割領域が更新され形が変わる(図5(b)参
照)。また、新たに分割された領域ごとに重心(xi +
1,j, yi +1,j )、代表色(L* i +1,j , a* i +
1,j , b* i +1,j )、代表画像方向成分d1i +1,j
を計算し直し、その値を分割領域バッファ6(図1
(a)参照)に書き込んで値を更新する(図6(b)参
照)。
When such processing is executed for all pixels, the divided regions are updated and their shapes change (see FIG. 5(b)). For each newly divided region, the center of gravity (xi+1,j, yi+1,j), the representative color (L*i+1,j, a*i+1,j, b*i+1,j), and the representative image direction component d1i+1,j are recalculated, and these values are written into the divided area buffer 6 (see FIG. 1(a)) to update it (see FIG. 6(b)).

【0037】一般に、代表色(L* i +1,j , a* i +
1,j , b* i +1,j )は新たに分割された分割領域内に
ある画素の単純な算術平均とする。また、重心は分割さ
れた領域に含まれる全ての画素のxy座標値をそれぞれ
合計し、両者を領域に含まれる画素数で割った値とす
る。このような計算で以上の6つの値を持つ仮想的な画
素を求め、新たな領域中心とする。
In general, the representative color (L*i+1,j, a*i+1,j, b*i+1,j) is taken as the simple arithmetic mean of the pixels contained in the newly divided region. The center of gravity is obtained by summing the x coordinates and the y coordinates of all pixels contained in the divided region and dividing each sum by the number of pixels in the region. A virtual pixel having the above six values is obtained by this calculation and taken as the new region center.
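
A sketch of this center update; how the representative direction component d1 is recomputed is not spelled out in the text, so averaging d1 over the directed member pixels (d2 ≥ 1) is an assumption:

```python
def update_region_center(member_pixels, previous_center):
    """New region center of [0036]-[0037]: a virtual pixel whose xy is the
    centroid and whose L*a*b* is the arithmetic mean of the member pixels.
    Each member pixel is (x, y, L*, a*, b*, d1, d2)."""
    if not member_pixels:
        return previous_center                   # keep the old center for an empty region
    n = len(member_pixels)
    mean = [sum(p[k] for p in member_pixels) / n for k in range(5)]
    directed = [p[5] for p in member_pixels if p[6] >= 1]
    d1_rep = sum(directed) / len(directed) if directed else 16
    return (mean[0], mean[1], mean[2], mean[3], mean[4], d1_rep)
```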

【0038】この処理を繰り返し実行すると領域は段々
一定の形に収束していき、あまり変形しなくなる。図1
(b)に示す分割終了判定手段55は、この領域の変形
量を算出し、この値に基づいて分割処理を打ち切るよう
判断している。
When this process is executed repeatedly, the regions gradually converge to a fixed shape and no longer deform very much. The division end determining means 55 shown in FIG. 1(b) calculates the amount of deformation of the regions and decides, based on this value, when to terminate the division processing.

【0039】すなわち、領域中心が収束したか否かの判
断は、新たな領域中心(xi,j ,yi,j ,L* i,j ,a
* i,j ,b* i,j ,d1i,j )と前の領域中心(xi-1,
j ,yi-1,j ,L* i-1,j ,a* i-1,j ,b* i-1,j ,
d1i-1,j )とのずれ量Di,j (iは繰り返し計算した
回数、jは分割領域番号)を計算し、ずれた距離が無視
できるほど小さくなった場合に収束したと判断して領域
分割処理を終了するようにしている。このずれ量Di,j
を計算する場合には、先ず各成分の差を数10によって
求める。
That is, whether the region centers have converged is judged by calculating the shift Di,j (i is the number of iterations, j is the divided region number) between the new region center (xi,j, yi,j, L*i,j, a*i,j, b*i,j, d1i,j) and the previous region center (xi-1,j, yi-1,j, L*i-1,j, a*i-1,j, b*i-1,j, d1i-1,j); when the shifted distance becomes negligibly small, the regions are judged to have converged and the area division processing is terminated. To calculate this shift Di,j, the difference of each component is first obtained by Equation 10.

【数10】 Dxi,j = (xi,j − xi-1,j),  Dyi,j = (yi,j − yi-1,j),  Dli,j = (L*i,j − L*i-1,j),  Dai,j = (a*i,j − a*i-1,j),  Dbi,j = (b*i,j − b*i-1,j)

【0040】また、方向成分に関しては次のようにす
る。先ず、d1i,j =16かつd1i-1,j =16の場
合、つまり方向成分がない場合は数11のようにする。
The direction component is handled as follows. First, when d1i,j = 16 and d1i-1,j = 16, that is, when neither has a direction component, Equation 11 is used.

【数11】 Ddi,j = 0

【0041】また、D1i,j <16かつD1i-1,j <1
6の場合は数12のようにする。
Further, when d1i,j < 16 and d1i-1,j < 16, Equation 12 is used.

【数12】 Ddi,j = |d1i,j − d1i-1,j|; however, if Ddi,j > 8, then Ddi,j = 16 − |d1i,j − d1i-1,j|

【0042】さらに、(d1i,j =16かつd1i-1,j
<16)または(d1i,j <16かつd1i-1,j =1
6)の場合は、互いに角度がπ/2異なる方向成分同士
の距離よりは近く、同じ方向を向いている方向成分同士
の距離よりは離れているものと考え、0と7の間をとっ
て数13のようにする。
Further, in the case of (d1i,j = 16 and d1i-1,j < 16) or (d1i,j < 16 and d1i-1,j = 16), the pair is regarded as closer than two direction components whose angles differ by π/2 but farther apart than two direction components pointing in the same direction, so a value between 0 and 7 is taken as in Equation 13.

【数13】 Ddi,j = 3.5

【0043】そして、これらを基にして、新たな領域中
心と前の領域中心とのずれ量Di,jを数14により計算
する。
Then, based on these values, the shift Di,j between the new region center and the previous region center is calculated by Equation 14.

【数14】 Di,j = kx(1 + kh1・H(d2i,j))・Dxi,j² + ky(1 + kh1・H(d2i,j))・Dyi,j² + kl・Dli,j² + ka・Dai,j² + kb・Dbi,j² + kd・Ddi,j²

【0044】なお、数14において、iは繰り返し計算
した回数、jは分割領域番号、kx、ky、kh1、k
l、ka、kb、kdは任意の係数であり、ここではす
べて「1」としている。
In Equation 14, i is the number of iterations, j is the divided region number, and kx, ky, kh1, kl, ka, kb, and kd are arbitrary coefficients; here they are all set to 1.
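
A sketch of the shift Di,j of Equations 10-14 between a new and a previous region center (x, y, L*, a*, b*, d1); H_j is H(d2i,j) for the region, and the direction term reuses the wrap-around logic of Equations 11-13:

```python
def center_shift(new_center, old_center, H_j,
                 kx=1.0, ky=1.0, kl=1.0, ka=1.0, kb=1.0, kd=1.0, kh1=1.0):
    """Shift D_{i,j} of Equation 14; all coefficients default to 1 per [0044]."""
    Dx, Dy, Dl, Da, Db = (new_center[k] - old_center[k] for k in range(5))
    d1_new, d1_old = new_center[5], old_center[5]
    if d1_new == 16 and d1_old == 16:
        Dd = 0.0                                  # Equation 11: both directionless
    elif d1_new < 16 and d1_old < 16:
        Dd = abs(d1_new - d1_old)                 # Equation 12, with wrap-around
        if Dd > 8:
            Dd = 16 - abs(d1_new - d1_old)
    else:
        Dd = 3.5                                  # Equation 13: only one has a direction
    return (kx * (1 + kh1 * H_j) * Dx ** 2
            + ky * (1 + kh1 * H_j) * Dy ** 2
            + kl * Dl ** 2 + ka * Da ** 2 + kb * Db ** 2
            + kd * Dd ** 2)
```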

【0045】図1(b)に示す分割終了判定手段55
は、このずれ量Di,j の計算を繰り返し行い、無視でき
るほど小さくなった場合に分割処理を終了して、その結
果(分割結果)を出力する。このようにして領域中心が
収束した際の画像は、人間の感覚により合った領域分割
が成されている状態となる。
The division end determining means 55 shown in FIG. 1(b) repeats the calculation of this shift Di,j, terminates the division processing when the shift becomes negligibly small, and outputs the result (the division result). The image obtained when the region centers have converged in this way is in a state in which an area division that better matches human perception has been achieved.

【0046】例えば、人間の頭髪部分の画像は流れるよ
うなテクスチャーが強く現れており、テクスチャー要素
の強さの値は大きくなっている。この部分を本発明の領
域分割装置1にて領域分割すると、ノイズ除去の量も多
くなり、領域変形の大きさも大きく、分割条件式におけ
るxy座標の寄与率も大きくなって頭髪部分の絵柄に沿
った(見た目の感覚に合った)細長い形状での領域分割
が成される。
For example, in an image of human hair, a flowing texture appears strongly and the strength of the texture element is large. When this portion is divided into regions by the area dividing device 1 of the present invention, the amount of noise removal becomes large, the degree of region deformation becomes large, and the contribution of the xy coordinates in the division conditional expression also becomes large, so the regions are divided into elongated shapes that follow the pattern of the hair and match the visual impression.

【0047】一方、人間の顔の部分の画像はテクスチャ
ーが少ないので、テクスチャー要素の強さの値は小さく
なる。この部分を本発明の領域分割装置1にて領域分割
すると、ノイズ除去の量も少なくなり、領域変形の大き
さも小さく、分割条件式におけるxy座標の寄与率も小
さく(色空間の寄与率が大きく)なって、顔の特徴がよ
く現れるような略円状の領域分割が成されることにな
る。
On the other hand, an image of a human face has little texture, so the strength of the texture element is small. When this portion is divided into regions by the area dividing device 1 of the present invention, the amount of noise removal becomes small, the degree of region deformation becomes small, and the contribution of the xy coordinates in the division conditional expression becomes small (the contribution of the color space becomes large), so a roughly circular area division is obtained in which the features of the face appear clearly.

【0048】なお、本実施例においては、画素のテクス
チャー要素の強さの値に応じてノイズ除去の条件を変化
させ、領域分割の際の領域変形の大きさを変化させ、さ
らに分割の条件式における色空間およびxy座標の寄与
率を変化させることを全て行う例を中心に説明したが、
本発明はこれに限定されず、各々個別にその条件を変化
させるようにしてもよい。また、どの条件を適用するか
をオペレータによって選択できるような構成としてもよ
い。また、本実施例では色空間としてL* ** を用
いたが、これ以外のL* * * 色空間など他の色空間
を用いた場合であっても同様である。
In this embodiment, the description has centered on an example in which the noise removal condition, the magnitude of region deformation during area division, and the contribution rates of the color space and the xy coordinates in the division conditional expression are all changed according to the value of the strength of the texture element of the pixels. However, the present invention is not limited to this, and each of these conditions may be changed individually. The operator may also be allowed to select which conditions are applied. Furthermore, although L*a*b* is used as the color space in this embodiment, the same applies when another color space, such as the L*u*v* color space, is used.

【0049】[0049]

【発明の効果】以上説明したように、本発明の領域分割
装置には次のような効果がある。すなわち、領域分割の
際に画像自身のテクスチャーの強さの値に応じて適応的
にノイズ除去の条件や領域変形の割合、分割の条件式に
現れる色空間およびxy座標の寄与率を変換させること
ができるため、人間の感覚に合った領域分割を行うこと
が可能となる。これによって、画像編集などの画像処理
に適した前処理を行うことが可能となる。
As described above, the area dividing device of the present invention has the following effects. That is, during area division, the noise removal condition, the degree of region deformation, and the contribution rates of the color space and the xy coordinates appearing in the division conditional expression can be changed adaptively according to the strength of the texture of the image itself, so an area division that matches human perception can be performed. This makes it possible to perform preprocessing suited to image processing such as image editing.

【図面の簡単な説明】[Brief description of drawings]

【図1】 本発明の領域分割装置を説明する図で、
(a)は全体構成図、(b)は領域分割手段の内部構成
図である。
FIG. 1 is a diagram illustrating an area dividing device of the present invention,
(a) is an overall configuration diagram and (b) is an internal configuration diagram of the area dividing means.

【図2】 画像データの抽出を示す図である。FIG. 2 is a diagram showing extraction of image data.

【図3】 方向要素抽出マスクを示す図で、(a)は水
平方向用、(b)は垂直方向用である。
FIGS. 3(a) and 3(b) are diagrams showing the direction element extraction masks, in which (a) is for the horizontal direction and (b) is for the vertical direction.

【図4】 方向成分の表示例を示す図である。FIG. 4 is a diagram showing a display example of a directional component.

【図5】 領域分割の状態を示す図で、(a)は初期状
態、(b)は更新後の状態である。
FIG. 5 is a diagram showing a state of area division, in which (a) is an initial state and (b) is a state after updating.

【図6】 分割領域テーブルの例を示す図で、(a)は
初期値、(b)はi回更新後である。
FIG. 6 is a diagram showing an example of a divided area table, in which (a) is an initial value and (b) is after i times of updating.

【図7】 ノイズ除去を説明する図である。FIG. 7 is a diagram illustrating noise removal.

【符号の説明】[Explanation of symbols]

1 領域分割装置 2 画像入力手段 3 方向成分付加手段 4 画像バッファ 5 領域分割手段 6 分割領域バッファ 7 分割結果出力手段 51 ノイズ除去強さ決定手段 52 変形強さ決定手段 53 寄与率変化手段 54 領域決定手段 55 分割終了判定手段 1 area dividing device 2 image input means 3 direction component adding means 4 image buffer 5 area dividing means 6 divided area buffer 7 division result output means 51 noise removal strength determining means 52 deformation strength determining means 53 contribution rate changing means 54 area determining Means 55 Division end judging means

Claims (3)

【特許請求の範囲】[Claims] 【請求項1】 量子化されたフルカラーの画像データを
その絵柄に応じて領域分割するにあたり、所定の色空
間、画素のxy座標さらには画素のテクスチャー要素を
用いる領域分割装置であって、 前記画素のテクスチャー要素の強さの値に応じて前記領
域分割の際のノイズ除去の条件を変化させることを特徴
とする領域分割装置。
1. A region dividing device that uses a predetermined color space, the xy coordinates of pixels, and a texture element of the pixels when dividing quantized full-color image data into regions according to its picture pattern, characterized in that the noise removal condition used during the region division is changed according to the value of the strength of the texture element of the pixels.
【請求項2】 量子化されたフルカラーの画像データを
その絵柄に応じて領域分割するにあたり、所定の色空
間、画素のxy座標さらには画素のテクスチャー要素を
用いる領域分割装置であって、 前記画素のテクスチャー要素の強さの値に応じて前記領
域分割の際の領域の変形の大きさを変化させることを特
徴とする領域分割装置。
2. A region dividing device that uses a predetermined color space, the xy coordinates of pixels, and a texture element of the pixels when dividing quantized full-color image data into regions according to its picture pattern, characterized in that the magnitude of the deformation of the regions during the region division is changed according to the value of the strength of the texture element of the pixels.
【請求項3】 量子化されたフルカラーの画像データを
その絵柄に応じて領域分割するにあたり、所定の色空
間、画素のxy座標さらには画素のテクスチャー要素を
用いる領域分割装置であって、 前記画素のテクスチャー要素の強さの値に応じて前記領
域分割の際の分割の条件式に現れる色空間およびxy座
標の寄与率を変化させることを特徴とする領域分割装
置。
3. A region dividing device that uses a predetermined color space, the xy coordinates of pixels, and a texture element of the pixels when dividing quantized full-color image data into regions according to its picture pattern, characterized in that the contribution rates of the color space and the xy coordinates appearing in the conditional expression of the division during the region division are changed according to the value of the strength of the texture element of the pixels.
JP7063653A 1995-03-23 1995-03-23 Area dividing device Expired - Fee Related JP3055423B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP7063653A JP3055423B2 (en) 1995-03-23 1995-03-23 Area dividing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP7063653A JP3055423B2 (en) 1995-03-23 1995-03-23 Area dividing device

Publications (2)

Publication Number Publication Date
JPH08263647A true JPH08263647A (en) 1996-10-11
JP3055423B2 JP3055423B2 (en) 2000-06-26

Family

ID=13235533

Family Applications (1)

Application Number Title Priority Date Filing Date
JP7063653A Expired - Fee Related JP3055423B2 (en) 1995-03-23 1995-03-23 Area dividing device

Country Status (1)

Country Link
JP (1) JP3055423B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005190474A (en) * 2003-12-09 2005-07-14 Mitsubishi Electric Information Technology Centre Europa Bv Method of classifying pixel in image, image signal processing method, skin detection method using the method, method and device configured to perform encoding, transmission, reception, or decoding using data derived by the method, device configured to execute the method, computer program for executing the method, and computer readable storage medium having computer program stored therein
JP2005293555A (en) * 2004-03-10 2005-10-20 Seiko Epson Corp Identification of skin area in image
KR100908384B1 (en) * 2002-06-25 2009-07-20 주식회사 케이티 Region-based Texture Extraction Apparatus Using Block Correlation Coefficient and Its Method
WO2011030756A1 (en) * 2009-09-14 2011-03-17 国立大学法人東京大学 Region segmentation image generation method, region segmentation image generation device, and computer program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100908384B1 (en) * 2002-06-25 2009-07-20 주식회사 케이티 Region-based Texture Extraction Apparatus Using Block Correlation Coefficient and Its Method
JP2005190474A (en) * 2003-12-09 2005-07-14 Mitsubishi Electric Information Technology Centre Europa Bv Method of classifying pixel in image, image signal processing method, skin detection method using the method, method and device configured to perform encoding, transmission, reception, or decoding using data derived by the method, device configured to execute the method, computer program for executing the method, and computer readable storage medium having computer program stored therein
JP2005293555A (en) * 2004-03-10 2005-10-20 Seiko Epson Corp Identification of skin area in image
WO2011030756A1 (en) * 2009-09-14 2011-03-17 国立大学法人東京大学 Region segmentation image generation method, region segmentation image generation device, and computer program
JP2011060227A (en) * 2009-09-14 2011-03-24 Univ Of Tokyo Method and device for generating region segmentation image, and computer program

Also Published As

Publication number Publication date
JP3055423B2 (en) 2000-06-26

Similar Documents

Publication Publication Date Title
US5577175A (en) 3-dimensional animation generating apparatus and a method for generating a 3-dimensional animation
US8861800B2 (en) Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction
US7952577B2 (en) Automatic 3D modeling system and method
JPH03218581A (en) Picture segmentation method
US20050093875A1 (en) Synthesis of progressively-variant textures and application to arbitrary surfaces
CN112184585A (en) Image completion method and system based on semantic edge fusion
JPS6055475A (en) Extracting device of boundary line
JP2006332785A (en) Image complement apparatus and image complement method, and program
US6760032B1 (en) Hardware-implemented cellular automata system and method
JPH10164370A (en) Method and device for interpolating gradation of image, image filtering method and image filter
JP3055423B2 (en) Area dividing device
JPH0830787A (en) Image area dividing method and image area integrating method
Lin et al. A genetic algorithm approach to Chinese handwriting normalization
US6429866B1 (en) Three-dimensional graphics drawing apparatus calculating tone of pixel based on tones of pixels at prescribed intervals, method thereof and medium recorded with program therefor
CN108268533A (en) A kind of Image Feature Matching method for image retrieval
JPH10124680A (en) Device for generating image and method therefor
JP2000306118A (en) Tree texture generating device
JPH05233826A (en) Method and device for changing lighting direction of image and method and device for synthesizing images
KR102374141B1 (en) Costume region removal method for flexible virtual fitting image generation
JP3000774B2 (en) Image processing method
JPH1125266A (en) Method for converting picture and device therefor
JPH1049670A (en) Method for processing picture
JP3344791B2 (en) Line segment extraction method
JP2962548B2 (en) Moving object region division method
CN117437118A (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080414

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090414

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100414

Year of fee payment: 10

LAPS Cancellation because of no payment of annual fees