JP3809210B2 - Image extraction device - Google Patents

Image extraction device

Info

Publication number
JP3809210B2
Authority
JP
Japan
Prior art keywords
contour
image
region
evaluation function
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP34388895A
Other languages
Japanese (ja)
Other versions
JPH09185719A (en)
Inventor
優和 真継
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to JP34388895A priority Critical patent/JP3809210B2/en
Publication of JPH09185719A publication Critical patent/JPH09185719A/en
Application granted granted Critical
Publication of JP3809210B2 publication Critical patent/JP3809210B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Color Image Communication Systems (AREA)
  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to an image extraction apparatus for extracting a specific object image, or its outer contour, from an arbitrary background.
[0002]
[Prior art]
Conventionally, one known technique for extracting the outer contour of an object from an image is the so-called active contour method (M. Kass et al., "Snakes: Active Contour Models," International Journal of Computer Vision, vol. 1, pp. 321-331, 1988). In this method, an initial contour, set appropriately (for example, so as to surround the object), is moved and deformed until it finally converges onto the outer shape of the object. The method performs the following processing: for a contour v(s) = (x(s), y(s)), expressed using a parameter s that describes the coordinates of each point, the evaluation function
[Expression 1]

E = ∫ { E1(v(s)) + w0 E0(v(s)) } ds    (1)
[0004]
is minimized, and the contour shape v(s) that minimizes it is obtained. Here, [0005]
[Expression 2]

E1(v(s)) = α(s) |dv/ds|^2 + β(s) |d^2v/ds^2|^2    (2)

E0(v(s)) = −|∇I{v(s)}|^2    (3)
[0006]
I{v(s)} denotes the luminance level at v(s), and α(s), β(s), and w0 are set appropriately by the user. For this approach of obtaining the target contour by minimizing an evaluation function defined on the contour (the active contour method), automatic methods for setting the initial contour are described in Japanese Patent Laid-Open Nos. 6-251148 and 6-282652, and methods for stabilizing the contour convergence process are described in Japanese Patent Laid-Open Nos. 5-12443, 6-138137, and 6-282649. As a method other than the active contour method that uses information other than the luminance level, there is an edge extraction method based on the local variance of a hue image (Japanese Patent Laid-Open No. 5-181969).
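For reference, the discrete form of the conventional energy of equations (1) to (3) can be evaluated on a sampled contour as in the following minimal NumPy sketch; the function name, the constant per-node α and β, and the precomputed |∇I|^2 map are illustrative assumptions, not part of the patent.

import numpy as np

def snake_energy(contour, grad_i_sq, alpha=0.5, beta=0.2, w0=1.0):
    # contour: (N, 2) array of (x, y) sampling points v(s) on a closed curve.
    # grad_i_sq: 2-D array holding |grad I|^2 at each pixel of the image.
    v = np.asarray(contour, dtype=float)
    # Finite differences approximate dv/ds and d^2v/ds^2 on the closed contour.
    dv = np.roll(v, -1, axis=0) - v
    d2v = np.roll(v, -1, axis=0) - 2.0 * v + np.roll(v, 1, axis=0)
    e1 = alpha * (dv ** 2).sum(axis=1) + beta * (d2v ** 2).sum(axis=1)  # Eq. (2)
    # Eq. (3): E0 = -|grad I|^2, sampled at the rounded contour points.
    xi = np.clip(np.rint(v[:, 0]).astype(int), 0, grad_i_sq.shape[1] - 1)
    yi = np.clip(np.rint(v[:, 1]).astype(int), 0, grad_i_sq.shape[0] - 1)
    e0 = -grad_i_sq[yi, xi]
    return float((e1 + w0 * e0).sum())  # Eq. (1), discretized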
[0007]
[Problems to be solved by the invention]
However, in the active contour method described above, the image-dependent term of the evaluation function (for example, E0 in the equations above) depends only on the luminance level. In regions where the luminance levels of the background and the target image are close, the contour shape after convergence is therefore influenced by the background and deviates from the true shape. The contour extraction result is thus easily affected by the object's own shadow, by illumination conditions, and so on, and the method is difficult to apply to arbitrary backgrounds. In particular, a complex texture pattern inside the contour generally cannot be distinguished from texture in the background by second-order partial differentiation of the luminance level (E0), so stable convergence onto the contour of an object against an arbitrary background was extremely difficult. The edge extraction method of Japanese Patent Laid-Open No. 5-181969, for its part, is inherently susceptible to noise, and its edge detection resolution is determined by the size of the divided local regions.
[0008]
Accordingly, a first object of the present invention is to use the color information in the image to improve the accuracy with which the active contour converges onto the target, and to make the convergence result insensitive to luminance-level fluctuations and to the luminance distribution of the background image. A further object is to enable automatic setting of the initial contour using a model image.
A second object of the present invention is to suppress the influence of luminance-level variation factors by incorporating the color components of the image into the evaluation function.
A third object of the present invention is to enable automatic contour extraction in which model contour image data including color components is provided.
[0009]
A fourth object of the present invention is to improve contour convergence accuracy and speed in the active contour region method by separating color component information from luminance level information.
A fifth object of the present invention is to make the active contour region method using color components less susceptible to luminance-level fluctuations and to shading.
A sixth object of the present invention is to allow the sampling-point positions on the model contour to be set appropriately in advance (for example, spacing the points more densely or sparsely according to the steepness of the shape), thereby improving convergence accuracy and convergence speed.
A seventh object of the present invention is to achieve the first object with a small amount of computation.
[0010]
[Means for Solving the Problems]
An image extraction apparatus according to the present invention comprises, for example: setting means for setting an initial contour region on an input inspection image; and updating means for performing a first update process, which updates the contour region until the calculation result of a first evaluation function including a term having color components as parameters satisfies a predetermined convergence condition, and a second update process, which, using the result obtained by the first update process and a second evaluation function including a term having a luminance component as a parameter, updates the contour region until the calculation result of the second evaluation function satisfies a predetermined convergence condition.
[0018]
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 shows the configuration of the main part of the first embodiment. Reference numeral 1 denotes an image input unit for inputting an inspection image, for which, e.g., an imaging apparatus is used; 2 denotes an image database storing standard images; 3 an initial contour region extraction unit; 4 an initial contour region setting unit that places the initial contour region on the inspection image; and 5 a contour neighborhood region update unit, which updates the node positions on the contour based on an evaluation function described later and also updates the position of each point in the adjacent region based on the node positions. Reference numeral 6 denotes an in-contour region image output unit that outputs the image inside the contour, i.e., the subject image separated from the background; 7 denotes an image storage unit for temporarily holding images from the image input unit 1; and 8 denotes a data input terminal used to select the standard image for contour extraction and to set parameters for extracting the initial contour region and for placing it on the inspection image.
[0019]
The initial contour region extraction unit 3 extracts, from a standard color image of the specific object stored in, e.g., the image database 2, the initial image data needed for the active contour region processing of this embodiment. The outer contour extraction unit 31 extracts the outermost contour of the object (corresponding to the contour of its silhouette image) from the standard image data (an image of the specific subject against a plain background). The outer-contour inner neighborhood extraction unit 32 extracts the image of the neighborhood region of the object inscribed in the outer contour; the extracted region need only contain the minimum image data required to represent the texture or color component information of the target image inside the contour. Typically, a partial region of the target image is extracted from each point on the outer contour, along the inward normal direction, with a predetermined width (for example, ten-odd pixels, or about 10% of the target image size). The width of the neighborhood region may be varied as appropriate from point to point on the image. The contour neighborhood region data storage unit 33 stores the image data of the extracted region (luminance, color difference, etc.) and the coordinates of each point in the region. This contour neighborhood region data may also be created and stored in the image database 2 in advance.
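The inward-normal band sampling described above can be sketched as follows; the function name, the uniform band width, and the orientation convention are illustrative assumptions.

import numpy as np

def inner_band(contour, band_width):
    # contour: (N, 2) closed polygon of (x, y) points.
    # band_width: depth of the neighborhood band in pixels.
    v = np.asarray(contour, dtype=float)
    # Central differences give the tangent direction at each contour point.
    tangent = np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True) + 1e-12
    # (ty, -tx) points into the region for a contour stored in image
    # coordinates (y down) and traversed counter-clockwise as displayed.
    inward = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    depths = np.arange(1, band_width + 1, dtype=float).reshape(-1, 1, 1)
    samples = v[None, :, :] + depths * inward[None, :, :]
    return samples.reshape(-1, 2)  # (N * band_width, 2) sample coordinates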
[0020]
The initial contour region setting unit 4 scales the extracted initial contour region to an appropriate size and places it at an appropriate position on the inspection image. The size is set slightly larger than that of the corresponding target image, and the center is desirably placed at the centroid of the target image region, or at a position where the sum of the distances between the true contour and the points of the initial contour is roughly minimized. Particularly precise placement is not required, however: setting errors on the order of 10% of the actual target image size, in both center position and size, cause no problem.
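As a sketch of this placement step (the function name and the 1.1 scale factor are illustrative choices, in line with the "slightly larger" guidance above):

import numpy as np

def place_initial_region(region_points, target_centroid, scale=1.1):
    # Scale the extracted region about its own centroid, slightly larger
    # than the target, then translate it onto the estimated target centroid.
    pts = np.asarray(region_points, dtype=float)
    centroid = pts.mean(axis=0)
    return (pts - centroid) * scale + np.asarray(target_centroid, dtype=float)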
[0021]
FIG. 2A shows an input image (an athletic shoe), and FIG. 2B shows the standard contour region, comprising the contour extracted from the standard image and its neighborhood region. FIG. 2C shows the relationship between the input image and the position and size of the set initial contour region. Needless to say, the luminance distribution of the standard contour region in FIG. 2B is prepared in advance so as to be close to that of the extraction target in the input image. The standard contour region need not, however, contain a texture structure as detailed as that shown in FIG. 2B; a reduced-resolution distribution may be used.
[0022]
The contour neighborhood region update unit 5 updates the coordinates of each point of the contour region, including the contour itself. Note that the luminance level data inside the contour region is essentially the image data of the initial contour region extracted from the standard image data, and remains unchanged.
[0023]
FIG. 3 shows the processing flow in the contour neighborhood region update unit 5.
The coordinates of the points in the contour region are updated by the calculation unit 51 and the update units 52 and 53 as follows. First, the weighting coefficients of the evaluation function are initialized (S31), and each sampling point on the contour is moved to one of its neighboring pixels (usually within the 8-neighborhood), following the conventional active contour method (S32). For a point inside the contour neighborhood region, the updated position is determined by a coordinate transformation that follows the updated position of the nearest contour point: for example, with O the centroid of the whole contour, the updated position of each point in the region can be obtained by scaling its coordinates relative to O according to the distance between O and the contour point nearest to it. Next, the evaluation function given by equation (4) below is calculated (S34), the update position giving the minimum value is selected (S35), and the degree of convergence is evaluated (convergence determination, S36), e.g., from the average displacement of the contour points before and after the update. If the convergence condition is not satisfied, the movement and deformation processing (S32 onward) is performed again; the weighting coefficients of the evaluation function may be changed as appropriate at that point (S37). As the evaluation function, [0024]
[Expression 3]

E = ∫ { E1(v(s)) + w0 E0(v(s)) } ds + w1 ∬R E2(x,y) dx dy    (4)
[0025]
is used, and the contour shape v(s) to which this evaluation function value converges at a minimum is obtained. The line integral of the first term of equation (4) may be any evaluation function given as an integral along the contour, as in the conventional example; it is not necessarily given by equations (1), (2), and (3) above. The second term is the integral, over the neighborhood region R containing the contour, of a function E2 defined below. Here w0 and w1 are constants, and in this embodiment E0 and E1 are, for simplicity, given by the same expressions as in the conventional example. E2(x,y) is a function expressing the difference, at a point (x,y) in the contour neighborhood region, between the standard image luminance Im(x,y) of the initial contour region and the luminance I(x,y) of the inspection image; for example,
[0026]
[Expression 4]

E2(x,y) = { I(x,y) − Im(x,y) }^2    (5)
[0027]
may be used. In the following, parameters with the subscript m denote data of the contour neighborhood region extracted from the standard image.
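The greedy update loop of S32 to S36 can be sketched as follows; energy_of (the line-integral term of equation (4)) and region_term (the integral of E2 over R) are assumed caller-supplied callables, and the tolerance and iteration cap are illustrative.

import numpy as np

OFFSETS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]  # 8-neighborhood plus "stay"

def greedy_step(contour, energy_of, region_term, w1=1.0):
    # Move each node to whichever neighboring pixel minimizes Eq. (4).
    v = contour.copy()
    for i in range(len(v)):
        original = v[i].copy()
        best_e, best_pos = np.inf, original
        for dx, dy in OFFSETS:
            v[i] = original + (dx, dy)
            e = energy_of(v) + w1 * region_term(v)
            if e < best_e:
                best_e, best_pos = e, v[i].copy()
        v[i] = best_pos
    return v

def converge(contour, energy_of, region_term, tol=0.1, max_iter=200):
    # Iterate S32-S35 until the mean node displacement falls below tol (S36).
    v = np.asarray(contour, dtype=float)
    for _ in range(max_iter):
        v_new = greedy_step(v, energy_of, region_term)
        if np.mean(np.linalg.norm(v_new - v, axis=1)) < tol:
            return v_new
        v = v_new
    return v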
[0028]
FIG. 4 shows the operation of this embodiment. An inspection image is input from the image input unit 1 in S41 and a model image from the image database 2 in S42; the initial contour region is then extracted in S43 and its size set in S44. In S45 to S47, the evaluation function is calculated and the contour region updated, as described above, until the convergence condition is satisfied; upon convergence, the contour image, or the image data inside the contour, is output from the output unit 6 in S48.
[0029]
Next, a second embodiment will be described.
FIG. 5 shows an example of the standard contour region used in this embodiment: part of the upper body of a human figure. As this illustrates, the shape of the standard contour region used for setting the initial contour region may be represented by a combination model of simple shape patterns (shape primitives). Also, as shown in the figure, a contour region carrying the color component information used by the evaluation function may be set only along parts of the contour; by embedding color information only at key points, the degradation of convergence accuracy caused by differences between the input image and the standard image can be suppressed. Besides color information, texture patterns may be applied to parts of the contour neighborhood at the same time.
[0030]
In this embodiment, the outer contour of the target image and the image inside the contour are extracted using an evaluation function in which a color component term and a luminance component term are added with weights. As an example of the evaluation function, equation (4) is used, with the contour region evaluation term E2(x,y) given as follows.
[Expression 5]

E2(x,y) = γI { I(x,y) − Im(x,y) }^2 + γH | H(x,y) − Hm(x,y) |^2    (6)
[0032]
Here, H(x,y) is the hue vector normalized by the luminance value, and γI and γH are weighting coefficients. These values may be varied according to the degree of convergence (e.g., the average displacement of the contour points between updates). For example, when the average hues of the background and the object differ, γH is initially set relatively large, and as the degree of convergence improves (the movement or deformation of the contour between updates becomes small, or falls below a predetermined threshold), the weight of the color component term is lowered (γH made small). This enables highly reliable active contour extraction that reflects the difference in color components between the background image and the target image. As an example of an image-specific evaluation function term taking the color components into account (a weighted sum of luminance gradients and color component gradients), expressed in terms of the color components (R, G, B) of each point,
[0033]
[Expression 6]

E0(x,y) = −{ wI |∇^2 I(x,y)|^2 + wR |∇^2 R(x,y)|^2 + wG |∇^2 G(x,y)|^2 + wB |∇^2 B(x,y)|^2 }    (7)
[0034]
may be used. The gradient of the image data may also be evaluated in the same way after conversion to hue, saturation, and lightness. The differential operation expressing the gradient is not limited to the second-order differential described above.
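A sketch of the color difference term in the spirit of equation (6); the sum-normalized chromaticity used for H(x,y), and luminance taken as R+G+B, are assumptions, since the exact normalization in the patent's equation image is not reproduced here.

import numpy as np

def hue_vector(rgb):
    # One reading of "hue vector normalized by the luminance value":
    # chromaticity coordinates, i.e. RGB divided by their sum (assumed form).
    luminance = rgb.sum(axis=-1, keepdims=True) + 1e-12
    return rgb / luminance

def e2_color(inspect_rgb, model_rgb, gamma_i=1.0, gamma_h=1.0):
    # Weighted luminance-difference plus hue-difference term, per Eq. (6).
    li = inspect_rgb.sum(axis=-1)
    lm = model_rgb.sum(axis=-1)
    dh = hue_vector(inspect_rgb) - hue_vector(model_rgb)
    return gamma_i * (li - lm) ** 2 + gamma_h * (dh ** 2).sum(axis=-1)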
[0035]
Alternatively, after performing active contour region processing with the color component term only (γI = 0), active contour region processing including the luminance level term (γI, γH > 0) may be performed with that convergence result (the contour and the contour neighborhood region data) as the initial value. When the average hue difference between the target image region and the background image region adjacent to it is pronounced, this accelerates the convergence of the contour.
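The color-first schedule of this paragraph, sketched; run_region_snake is an assumed driver wrapping the update loop of FIG. 3, and the stage-2 weights are illustrative.

def two_stage_extraction(contour, run_region_snake):
    # Stage 1: color component term only (luminance weight zeroed).
    coarse = run_region_snake(contour, gamma_i=0.0, gamma_h=1.0)
    # Stage 2: refine from the stage-1 result with both terms enabled.
    return run_region_snake(coarse, gamma_i=1.0, gamma_h=0.5)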
[0036]
Next, a third embodiment will be described.
In this embodiment, active contour processing is performed on the point-sequence data of the initial contour and the evolving contour augmented with luminance and color components; no processing of the contour's neighborhood region is performed. That is, unlike the embodiments above, the evaluation function contains no term relating to the neighborhood region adjacent to the contour. FIG. 6 shows an example system configuration. In the figure, the initial contour extraction unit 3 comprises an outer contour extraction unit 31x operating on the standard image database, an image data extraction unit 34 for the points on the contour, and a storage unit 33x. When the outer contour is extracted from the standard image data, the image data extraction unit 34 extracts the luminance, the color component data (which may be converted to hue and saturation), and the coordinate values along the contour of the target image. As the evaluation function of this embodiment,
[0037]
[Expression 7]

E = ∫ { E1(v(s)) + w0 E0(v(s)) + w1 E2(v(s)) } ds,
E2(s) = γI { I(s) − Im(s) }^2 + γH | H(s) − Hm(s) |^2    (8)
[0038]
is used. Here, E2(s) is a term expressing the difference between the image data on the contour (I(s): luminance; H(s): hue) and the standard image data, and is not limited to the function above. In addition to the contour position update unit 52, the contour update unit 5 has an update unit 55 for the weighting coefficients w0, w1, γI, and γH. As in the second embodiment, the weighting coefficients can thus be changed stepwise according to the patterns and color component characteristics of the background and the target and to the degree of convergence, improving the convergence accuracy and convergence speed of the contour.
[0039]
Next, a fourth embodiment will be described.
In this embodiment, contour extraction by conventional active contour processing is performed as a first step; as a second step, initial contour region data of approximately the same size and approximately the same centroid position as the converged contour is set on the inspection image, and contour extraction is performed by the same active contour region processing as in the embodiments above. The size of the contour obtained in the first step may be represented, for example, by the height and width of its circumscribed rectangle, by its diagonal length, and so on. The initial contour region data is extracted from the standard image database as in the embodiments above. FIG. 7 shows the processing flow of this embodiment.
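A sketch of the size and center representation used in this second step (the dictionary layout and names are illustrative):

import numpy as np

def size_and_center(contour):
    # Represent the converged contour by its circumscribed rectangle,
    # diagonal length, and centroid, for placing the initial region data.
    v = np.asarray(contour, dtype=float)
    w, h = v.max(axis=0) - v.min(axis=0)
    return {"width": float(w), "height": float(h),
            "diagonal": float(np.hypot(w, h)),
            "centroid": v.mean(axis=0)}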
[0040]
[Effects of the Invention]
According to the present invention, contour extraction that reflects the difference in color components between the background and the target can be realized.
[Brief description of the drawings]
FIG. 1 is a block diagram showing a first embodiment of the present invention.
FIG. 2 is a configuration diagram showing an input image, a standard contour region, and an initial contour region.
FIG. 3 is a flowchart showing processing of a contour vicinity region update unit.
FIG. 4 is a flowchart showing the operation of the first embodiment.
FIG. 5 is a configuration diagram for explaining the second embodiment.
FIG. 6 is a block diagram showing a third embodiment.
FIG. 7 is a flowchart showing the operation of the fourth embodiment.
[Explanation of symbols]
DESCRIPTION OF SYMBOLS: 1 image input unit; 2 image database; 3 initial contour region extraction unit; 4 initial contour region setting unit; 5 contour neighborhood region update unit; 6 in-contour region image output unit

Claims (2)

1. An image extraction apparatus comprising: setting means for setting an initial contour region on an input inspection image; and updating means for performing a first update process, which updates the contour region until the calculation result of a first evaluation function including a term having color components as parameters satisfies a predetermined convergence condition, and a second update process, which, using the result obtained by the first update process and a second evaluation function including a term having a luminance component as a parameter, updates the contour region until the calculation result of the second evaluation function satisfies a predetermined convergence condition.
2. The image extraction apparatus according to claim 1, wherein the initial contour region includes a contour of a standard image and a neighboring region adjacent to the contour.
JP34388895A 1995-12-28 1995-12-28 Image extraction device Expired - Fee Related JP3809210B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP34388895A JP3809210B2 (en) 1995-12-28 1995-12-28 Image extraction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP34388895A JP3809210B2 (en) 1995-12-28 1995-12-28 Image extraction device

Publications (2)

Publication Number Publication Date
JPH09185719A JPH09185719A (en) 1997-07-15
JP3809210B2 true JP3809210B2 (en) 2006-08-16

Family

ID=18365019

Family Applications (1)

Application Number Title Priority Date Filing Date
JP34388895A Expired - Fee Related JP3809210B2 (en) 1995-12-28 1995-12-28 Image extraction device

Country Status (1)

Country Link
JP (1) JP3809210B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000209425A (en) 1998-11-09 2000-07-28 Canon Inc Device and method for processing image and storage medium
US7130446B2 (en) * 2001-12-03 2006-10-31 Microsoft Corporation Automatic detection and tracking of multiple individuals using multiple cues
JP4701144B2 (en) * 2006-09-26 2011-06-15 富士通株式会社 Image processing apparatus, image processing method, and image processing program

Also Published As

Publication number Publication date
JPH09185719A (en) 1997-07-15

Similar Documents

Publication Publication Date Title
US6757444B2 (en) Image extraction apparatus and method
JP4813749B2 (en) How to segment video images based on basic objects
JP3534009B2 (en) Outline extraction method and apparatus
US20210045704A1 (en) Method for converting tone of chest x-ray image, storage medium, image tone conversion apparatus, server apparatus, and conversion method
JP4407985B2 (en) Image processing method and apparatus, and storage medium
US20090116731A1 (en) Method and system for detection of concha and intertragal notch point in 3D undetailed ear impressions
Sanchez et al. Statistical chromaticity models for lip tracking with B-splines
JP2002230549A (en) Image processing method and device
JP2002522836A (en) Still image generation method and apparatus
JP3809210B2 (en) Image extraction device
US5774578A (en) Apparatus for and method of utilizing density histograms for correcting objective images
JP3636936B2 (en) Grayscale image binarization method and recording medium recording grayscale image binarization program
JP2003216959A (en) Method, device and program for extracting outline
CN114677393B (en) Depth image processing method, depth image processing device, image pickup apparatus, conference system, and medium
KR100602739B1 (en) Semi-automatic field based image metamorphosis using recursive control-line matching
JP3078132B2 (en) Contour line extraction device
JP2775122B2 (en) Automatic contour extraction vectorization processing method of illustration data and processing device used for the method
JP4708866B2 (en) Lookup table creation device and method, and lookup table creation program
EP0959625A2 (en) Segmenting video images based on joint region and contour information
JP2004062505A (en) Image processor
JP3188899B2 (en) Image processing apparatus and method
JP2988097B2 (en) How to segment a grayscale image
JP3462960B2 (en) Image processing method
JP3392608B2 (en) Image processing apparatus and method
JP3972546B2 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20050520

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20050524

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20050725

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20051206

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060206

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20060509

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20060522

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100526

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110526

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120526

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130526

Year of fee payment: 7

LAPS Cancellation because of no payment of annual fees