JP4544891B2 - Image processing method and program for line extraction, line concentration image filter - Google Patents


Info

Publication number
JP4544891B2
Authority
JP
Japan
Prior art keywords
line
concentration
degree
image
search
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2004097420A
Other languages
Japanese (ja)
Other versions
JP2005284697A (en)
Inventor
幸靖 吉永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to JP2004097420A
Publication of JP2005284697A
Application granted
Publication of JP4544891B2
Anticipated expiration
Status: Expired - Fee Related



Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Description

The present invention relates to an image processing method for line extraction that can extract only line information from an image at high speed, to a program that executes this method, and to a line concentration degree image filter that performs line-extraction image processing by running the method on a computer.

When analyzing an object in an image, the lines that characterize its shape are important. One such line is the boundary that separates the object from the background; the other is a line in the ordinary sense, i.e., a line segment that appears with a narrow, elongated width in the image. The latter usually represents an elongated object raised like a ridge, and in some cases a recessed groove or the like.

A boundary in an image can be detected by taking derivatives or differences and finding the locations where the luminance changes sharply, because a boundary line itself has no width while the object and background differ in contrast. A line representing a raised elongated object (a linear convex region), on the other hand, appears with some width in the image, so it is recognized by extracting luminance ridge-line information, taking contrast and line width into account, with methods such as the two-dimensional second derivative (Laplacian of Gaussian) or ridge estimation from curvature. These methods, however, are sensitive to contrast, line width, and noise, so their results depend heavily on the image being analyzed.

For example, medical images such as X-ray photographs and CT images carry a great deal of medical information, but the contrast between the target region, the surrounding regions, and the background is generally dark and takes diverse forms, and three-dimensional structures such as blood vessels, fibers, and nerves are displayed two-dimensionally, so extracting a lesion from the shading is difficult. One example is a medical image of a tumor. When a tumor shadow appears in an image, malignant and benign tumors must be distinguished; with malignant tumors, white lines called spicules, of varying contrast, width, and pattern, often radiate from the center of the mass. Recognizing spicules takes experience, and extracting them from a medical image requires removing the influence of unnecessary shadows. Extracting spicules with the second derivative mentioned above requires preprocessing and threshold settings to remove the influence of contrast; the computation is complex and time-consuming, and the result is noisy.

Methods based on a line-convergence vector field model have therefore been proposed (see, for example, Non-Patent Documents 1 and 2). These detect line information with a model of the linear region, unaffected by contrast, line width, and the like. The model ignores the magnitude of the first-derivative vector (hereafter the luminance gradient vector) and considers only the directional distribution of the vectors. In an ideal linear convex region, the ridge line carries the maximum luminance; the luminance gradient vectors are then orthogonal to the ridge line, are distributed facing each other on its two sides, and converge toward it. A ridge line on which the luminance gradient vectors converge in this way is hereafter called a vector convergence line. Fig. 12(a) illustrates the definition of the line concentration degree, and Fig. 12(b) the line concentration degree in an infinitely long line-convergence vector field. When the luminance gradient vectors at a point P near a vector convergence line V_C converge on the line, each gradient vector is orthogonal to V_C, as shown by the dotted vector in Fig. 12(a). In practice, however, the actual gradient vector g_P is usually tilted; the angle θ_P between the two expresses how strongly the vector converges on the line. The line concentration degree C_P is therefore defined as the evaluation value of the line-convergence vector field with respect to V_C: C_P = cos θ_P (g_P ≠ 0) and C_P = 0 (g_P = 0). For a linear region A containing the neighboring points P, the line concentration degree C_A is defined as the average of C_P over all points P in the region. Hence, if V_C is a true line, C_A = 1.
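As a concrete illustration of these definitions, the per-point degree C_P = cos θ_P and the regional average C_A can be sketched in a few lines of Python. This is a hedged sketch of the definitions only, not the patent's implementation; the function names are ours, and the ideal direction (orthogonal to the assumed V_C) is passed in as a unit vector.

```python
import numpy as np

def point_concentration(grad, ideal_dir):
    """C_P = cos(theta_P): cosine of the angle between the measured gradient
    and the ideal direction (orthogonal to the assumed vector convergence
    line V_C); defined as 0 where the gradient vanishes."""
    norm = np.hypot(grad[0], grad[1])
    if norm == 0.0:
        return 0.0
    return float(np.dot(grad, ideal_dir)) / norm  # ideal_dir is a unit vector

def region_concentration(grads, ideal_dir):
    """C_A: mean of C_P over all points of the linear region A."""
    return sum(point_concentration(g, ideal_dir) for g in grads) / len(grads)

# All gradients aligned with the ideal direction -> C_A = 1 (a "true line").
ideal = np.array([1.0, 0.0])
grads = [np.array([2.0, 0.0]), np.array([0.5, 0.0]), np.array([3.0, 0.0])]
print(region_concentration(grads, ideal))  # 1.0
```

Note that only the direction of each gradient matters: the magnitudes 2.0, 0.5, and 3.0 all yield C_P = 1, which is exactly the contrast-independence the model is built on.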

In Non-Patent Document 1, the position, direction, and scale of the line-convergence vector field are estimated as follows. Assume a vector convergence line V_C passing through the point of interest P(x, y) with a given inclination φ, and take as a search line S_w a straight line parallel to V_C at distance w. The average line concentration degree of the points Q on the search line, C_S(w), is computed as C_S(w) = E[cos θ_Q] (Q ∈ S_w), where E[·] denotes the mean. Because the line-convergence vector field forms a region spreading on both sides of V_C, call this region A, let W be the maximum search width, and let W_r and W_l be the assumed distances from the search line S_w to the two edges of the region. The line concentration degree C_A of region A with respect to V_C is then the average of the line concentration degrees of the search lines in the region: C_A = E[C_S(w)] (S_w ∈ A). C_A takes its maximum when the point of interest P(x, y) lies on the true vector convergence line V_C and the inclination φ and the distances W_r and W_l match the true values. Accordingly, one searches for the angle φ and the distances W_r and W_l that maximize C_A, obtains the vector convergence line V_C at P(x, y), and takes the maximum of C_A as the line concentration degree C(x, y). This C(x, y) is called the full width adaptive region line convergence degree (hereafter the FA line concentration degree), written C(x, y) = max_φ[max_{W_r}{max_{W_l} C_A}] (W_r > 0, W_l > 0). The line-convergence vector field is determined in this way.
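The maximization over φ, W_r, and W_l can be pictured as a brute-force search. The sketch below is our own simplification (not the patent's algorithm): the search-line concentrations C_S(w) are supplied as a precomputed table indexed by signed offset, and the two wings are averaged separately as in the Fig. 12(b) variant.

```python
import numpy as np

def fa_line_concentration(cs_by_phi, max_w):
    """Brute-force FA search at one point of interest.

    cs_by_phi[p] is a dict mapping the signed offset w of a search line
    (positive = r side, negative = l side) to its concentration C_S(w)
    for candidate direction index p.  Returns (C, p, W_r, W_l) maximizing
    C_A = (mean over r-side lines + mean over l-side lines) / 2.
    """
    best = (-1.0, None, None, None)
    for p, cs in enumerate(cs_by_phi):
        for wr in range(1, max_w + 1):
            c_r = np.mean([cs[w] for w in range(1, wr + 1)])
            for wl in range(1, max_w + 1):
                c_l = np.mean([cs[-w] for w in range(1, wl + 1)])
                c = (c_r + c_l) / 2.0
                if c > best[0]:
                    best = (c, p, wr, wl)
    return best

# An ideal vector convergence line through the point: C_S(w) = 1 at every
# offset for direction 0, so the search returns C = 1.
ideal = [{w: 1.0 for w in range(-3, 4) if w != 0}]
print(fa_line_concentration(ideal, 3)[0])  # 1.0
```

The triple loop over (φ, W_r, W_l) at every pixel is exactly the computational burden criticized later in this document; the sketch makes that cost visible.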

In the line-convergence vector model shown in Fig. 12(b), a vector convergence line V_C is assumed at the point of interest P(x, y), the line-convergence vector field of the assumed surrounding region A is varied adaptively, the FA concentration degree C(x, y) = max_φ[{max_{W_r} C_Ar + max_{W_l} C_Al}/2] is computed, and the line widths W_r and W_l are estimated. Here C_Ar and C_Al are the averages of the concentration degrees of the luminance gradient vectors in the regions estimated in the r and l directions. For a linear convex region representing a raised elongated object, the line-convergence vector field has the vector convergence line V_C at its center, width 2W, and infinite length.

Now compute the concentration degree at a measurement point Q(x, y) at distance w from this vector convergence line V_C, with the candidate line assumed along the search line S_w through Q. In the region reached by going in the l direction (leftward in the figure) from S_w past V_C to the l boundary, the gradient vectors all point in the r direction (rightward in the figure), matching the direction of convergence toward the candidate line, so the concentration degree is C_S(w) = 1. In the region between S_w and V_C they point in the l direction, opposite to the convergence direction, so C_S(w) = −1; in the r direction from S_w they again match the convergence direction, so C_S(w) = 1.

Accordingly, C_Ar = 1 over the region within distance W_r of the search line S_w. In the l direction, by contrast, C_Al varies with the width W_l: C_Al = −1 when 0 < W_l < w; C_Al = (W_l − 2w)/W_l when w < W_l < w + W; and C_Al = (W − 2w)/W_l when w + W < W_l. When W_l coincides with the boundary, W_l = w + W and C_Al = (W − w)/(W + w), its maximum, so the region can be estimated regardless of the position of the measurement point Q. Over the region as a whole, the line concentration degree is C(x, y) = {1 + (W − 2w)/W}/2 = 1/2 + (W − 2w)/2W = 1 − w/W. Thus C(x, y) = 1 on the vector convergence line V_C, and C(x, y) is 0.5 or more inside the line-convergence vector field.

The above discussion, however, assumes a noise-free image; with the noise present in real image processing, detecting the vector convergence line V_C by the conventional FA region line concentration method becomes difficult. Fig. 13 is a flowchart of line concentration computation for image processing with the FA concentration degree based on the GAWHT (Gradient-Angle-Weighted Hough Transform) method. As shown in Fig. 13, first the maximum search width W and length l that determine the search region, the search-line direction φ, the division number n for discretizing φ and the luminance gradient direction θ, and the luminance I(x, y) of the original image are input (step 101), and the position of the measurement point Q(x, y) is initialized (step 102). Next, the luminance gradient vectors are computed from the input luminance I(x, y) (step 103), a neighborhood (i, j) within the search region is assumed, and the range of (i, j) is computed (step 104).

Next, the line concentration degree on the evaluation lines within this (i, j) neighborhood range is computed (step 105), where ρ denotes the discretized distance to the evaluation line. The quantities h_cr(ρ) and h_cl(ρ) are Hough transforms of the partial straight lines restricted to the (i, j) range. Varying ρ from 0 to W_r in the r direction and from 0 to W_l in the l direction, the sums of h_cr(ρ) and h_cl(ρ) are taken, which yields the maximum of C_H(x, y) and the corresponding W_r and W_l (step 106). Finally, the measurement point Q(x, y) is moved to extract the line information over the whole image (step 107).
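Step 103, computing the luminance gradient vectors from I(x, y), is commonly done with finite differences. A minimal sketch (our own illustration, not the patent's code) that returns the gradient direction and magnitude per pixel:

```python
import numpy as np

def luminance_gradient(image):
    """Luminance gradient by finite differences: returns the gradient
    direction theta (radians) and magnitude at every pixel."""
    gy, gx = np.gradient(image.astype(float))  # d/drow, d/dcolumn
    return np.arctan2(gy, gx), np.hypot(gx, gy)

# A horizontal luminance ramp: the gradient points in the +x direction
# (theta = 0) with magnitude 1 everywhere.
ramp = np.tile(np.arange(5.0), (4, 1))
theta, mag = luminance_gradient(ramp)
print(theta[2, 2], mag[2, 2])  # 0.0 1.0
```

Only `theta` feeds the concentration model; `mag` is needed just to mask zero-gradient pixels, in line with the C_P = 0 case of the definition.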

As the above shows, the FA region line concentration method is unaffected by uneven contrast and by the width of the line-convergence vector field, but it requires an enormous amount of computation time and is easily contaminated by noise, whose influence appears both in the direction φ and in the line width.

An irradiation-field recognition method for radiographic images has also been proposed that, like Non-Patent Documents 1 and 2, quantifies density gradient vectors (using sin θ_P), computes entropy from the probability density P_i of their histogram, and determines the contour by searching for the reference candidate line with minimum entropy (see Patent Document 1). This technique, however, extracts surface boundaries; it neither extracts only the line information of linear convex regions nor addresses the problem of extracting line information at high speed.

Patent Document 1: JP-A-10-275213
Non-Patent Document 1: IEICE Transactions, Vol. J81-D-II, No. 11, November 1998, Y. Yoshinaga et al., "A Detection Method of Line Regions Unaffected by Contrast and Width Using Luminance Gradient Vectors," pp. 2547-2555
Non-Patent Document 2: IEICE Transactions, Vol. J87-D-II, No. 1, January 2004, Y. Yoshinaga et al., "A Spicule Discrimination Method Using Line Feature Extraction Based on a Luminance Gradient Vector Field Model," pp. 146-153

As explained above, the conventional methods that extract the luminance vector convergence line V_C corresponding to the ridge of a line representing a raised elongated object in an image (a linear convex region), such as the second derivative or ridge estimation from curvature, are affected by contrast, line width, and noise, so the analysis result depends heavily on the quality of the image being analyzed.

The linear-region analysis based on the line-convergence vector field model of Non-Patent Documents 1 and 2 is affected by neither contrast nor the width of the field. It is, however, affected by noise, which produces unevenness in the result; the same holds for the technique of Patent Document 1. Moreover, carrying out the computations of Non-Patent Documents 1 and 2 with the conventional GAWHT-based measurement method requires the enormous calculations described above for the FA line concentration degree and the line width, which has been a major practical obstacle. When noise is present, the result is poor despite all that computation, and the reliability of the analysis can suffer.

To solve these problems, the present invention aims to provide an image processing method for line extraction that extracts line information from an image without being affected by image contrast, line width, noise, and the like, together with a program that executes it and a line concentration degree image filter.

The present invention prepares, for each of a plurality of measurement points on an image, a measurement neighborhood of fixed shape anchored to the two wings of a search line passing through the measurement point; measures the direction of the luminance gradient vector of the image at a plurality of neighboring points contained in the neighborhood; computes at each neighboring point a concentration degree that evaluates the difference between the gradient direction and the search-line direction; computes the line concentration degree of the measurement point from all the concentration degrees in the neighborhood; and judges that line information lies along the search line when the line concentration degree reaches a maximum. The invention is characterized in that the gradient directions and the search-line directions are both discretized; that basic addition values of the neighboring-point concentration degrees, shareable among all measurement points, are computed in advance for each search-line direction on local coordinates representing the neighborhood, based on the gradient direction and the positions and number of pixels in the neighborhood; and that, when a gradient direction is measured, one of the candidate basic addition values is selected per search-line direction and accumulated over the neighborhood, so that the line concentration degree of each measurement point is computed from the basic addition values.

According to the line concentration degree image filter, the program, and the image processing method for line extraction of the present invention, line information in an image can be extracted without being affected by the contrast of the image, the width of the lines, noise, and the like.

To solve the above problems, the first aspect of the present invention is an image processing method for line extraction that prepares, for each of a plurality of measurement points on an image, a measurement neighborhood of fixed shape anchored to the two wings of a search line passing through the measurement point; measures the direction of the luminance gradient vector of the image at a plurality of neighboring points contained in the neighborhood; computes at each neighboring point a concentration degree that evaluates the difference between the gradient direction and the search-line direction; computes the line concentration degree of the measurement point from all the concentration degrees in the neighborhood; and judges that line information lies along the search line when the line concentration degree reaches a maximum. The method is characterized in that the gradient directions and the search-line directions are both discretized; that basic addition values of the neighboring-point concentration degrees, shareable among all measurement points, are computed in advance for each search-line direction on local coordinates representing the neighborhood, based on the gradient direction and the positions and number of pixels in the neighborhood; and that, when a gradient direction is measured, one of the candidate basic addition values is selected per search-line direction and accumulated over the neighborhood, so that the line concentration degree of each measurement point is computed from the basic addition values. Discretization, computation over a neighborhood of fixed shape, and the separation of shareable basic addition values with storage of their precomputed results speed up the computation at each measurement point; and because the measurement uses the direction of the luminance gradient vector over a neighborhood of fixed shape, line information in the image can be extracted without being affected by image contrast, line width, noise, and the like.
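The precomputation this aspect describes can be pictured as a lookup table indexed by the discretized search-line direction and the discretized gradient direction. The sketch below is our own reduced illustration: it uses the cosine of the angle to the search line's perpendicular as the concentration function, chooses an arbitrary discretization count, and omits the per-pixel position weighting and the r/l wing bookkeeping of the actual claim.

```python
import numpy as np

N_DIR = 16  # number of discretized directions (an assumed value)

# Basic addition values, computed once and shared by every measurement point:
# BASIC[p, t] is the contribution of a neighboring pixel whose discretized
# gradient direction is theta_t when the search line has direction phi_p.
phi = np.arange(N_DIR) * np.pi / N_DIR          # search-line directions
theta = np.arange(2 * N_DIR) * np.pi / N_DIR    # gradient directions (0..2*pi)
BASIC = np.cos(theta[None, :] - (phi[:, None] + np.pi / 2.0))

def line_concentration(theta_idx, phi_idx):
    """Line concentration of one measurement point: just average the
    precomputed table entries for the neighborhood's discretized gradient
    directions -- no trigonometry is evaluated in this inner loop."""
    return float(np.mean(BASIC[phi_idx, theta_idx]))

# Gradients exactly perpendicular to a search line of direction phi_0 = 0
# (theta = pi/2, i.e. index N_DIR // 2) give concentration 1.
print(line_concentration(np.array([N_DIR // 2] * 4), 0))  # 1.0
```

The point of the design is visible in `line_concentration`: the per-measurement-point work reduces to table lookups and an accumulation, which is what makes the claimed method fast.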

The second aspect of the present invention, dependent on the first, is an image processing method for line extraction characterized in that the neighborhood is a fixed, elongated rectangular region of predetermined width along the two wings of the search line; this reduces the amount of computation and speeds up the calculation.

The third aspect of the present invention, dependent on the first or second, is an image processing method for line extraction characterized in that the neighborhood consists of two regions, one on each wing of the search line; this makes the method extremely robust against noise.

The fourth aspect of the present invention, dependent on any of the first to third, is an image processing method for line extraction characterized in that the concentration degree c(t) is computed by a function satisfying c(t) = c(−t), c(t + π) = −c(t), c(0) = 1, and c(π/2) = 0, and monotonically decreasing on 0 ≤ t ≤ π; this allows the shareable basic addition values to be computed in advance.
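The cosine used in the earlier definitions is one function meeting all of these conditions; a quick numerical check (our own illustration):

```python
import math

def c(t):
    """Concentration-function candidate: cos satisfies c(t) = c(-t),
    c(t + pi) = -c(t), c(0) = 1, c(pi/2) = 0, and decreases
    monotonically on [0, pi]."""
    return math.cos(t)

assert abs(c(0.3) - c(-0.3)) < 1e-15               # even symmetry
assert abs(c(0.4 + math.pi) + c(0.4)) < 1e-12      # half-turn antisymmetry
assert c(0.0) == 1.0 and abs(c(math.pi / 2)) < 1e-12
ts = [i * math.pi / 100 for i in range(101)]
assert all(c(a) >= c(b) for a, b in zip(ts, ts[1:]))  # monotone on [0, pi]
print("all conditions hold")
```

Any function with these properties would do; the symmetry conditions are what let one table entry serve gradient directions on both sides of the search line.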

The fifth aspect of the present invention, dependent on any of the first to fourth, is an image processing method for line extraction characterized in that line information is judged to be present when the line concentration degree exceeds 0.5. A line concentration degree of 0.5 or less includes information on the boundaries of surface regions, which can thus be removed together with other noise.

The sixth aspect of the present invention is a program for causing a computer to function as: (1) basic addition value computation means that provides, for each of a plurality of measurement points on an image, a measurement neighborhood of fixed shape on the two wings of a search line passing through the measurement point, discretizes the luminance gradient directions of the neighboring points contained in the neighborhood and the search-line directions, and computes and stores, for each search-line direction on local coordinates representing the neighborhood, basic addition values of the neighboring-point concentration degrees shareable among all measurement points, based on the gradient direction and the positions and number of pixels in the neighborhood; (2) line concentration degree computation means that, when a search-line direction is set, measures the gradient direction of the image at each neighboring point of each measurement point, retrieves the basic addition value corresponding to that direction from among the candidates, accumulates it over the neighborhood to compute the line concentration degree from the basic addition values, and outputs, among the line concentration degrees showing maxima at the measurement points, those that can be judged to be line information and/or the corresponding search-line directions; and (3) line extraction means that judges line information to be present when the line concentration degree computed by the line concentration degree computation means exceeds a predetermined noise tolerance value. Discretization, computation over a neighborhood of fixed shape, and the separation of shareable basic addition values with storage of their precomputed results speed up the computation at each measurement point; and because the measurement uses the direction of the luminance gradient vector over a neighborhood of fixed shape, line information in the image can be extracted without being affected by image contrast, line width, noise, and the like.

The seventh aspect of the present invention, dependent on the sixth, is a program characterized in that the noise tolerance value is 0.5. A line concentration degree of 0.5 or less includes information on the boundaries of surface regions, which can thus be removed together with other noise.

The eighth aspect of the present invention, dependent on the sixth or seventh, is a program characterized in that the basic addition value computation means sets a rectangular region as the neighborhood; this reduces the amount of computation and speeds up the calculation.

The ninth aspect of the present invention is a line concentration degree image filter comprising: (1) basic addition value computation means that provides, for each of a plurality of measurement points on an image, a measurement neighborhood of fixed shape on the two wings of a search line passing through the measurement point, discretizes the luminance gradient directions of the neighboring points contained in the neighborhood and the search-line directions, and computes and stores, for each search-line direction on local coordinates representing the neighborhood, basic addition values of the neighboring-point concentration degrees shareable among all measurement points, based on the gradient direction and the positions and number of pixels in the neighborhood; (2) line concentration degree computation means that, when a search-line direction is set, measures the gradient direction of the image at each neighboring point of each measurement point, retrieves the basic addition value corresponding to that direction from among the candidates, accumulates it over the neighborhood to compute the line concentration degree from the basic addition values, and outputs, among the line concentration degrees showing maxima at the measurement points, those that can be judged to be line information and/or the corresponding search-line directions; and (3) line extraction means that judges line information to be present when the line concentration degree computed by the line concentration degree computation means exceeds a predetermined noise tolerance value. Discretization, computation over a neighborhood of fixed shape, and the separation of shareable basic addition values with storage of their precomputed results speed up the computation at each measurement point; and because the measurement uses the direction of the luminance gradient vector over a neighborhood of fixed shape, line information in the image can be extracted without being affected by image contrast, line width, noise, and the like.

Line-extraction image processing according to Embodiment 1 of the present invention is described with reference to FIGS. 1 and 2. FIG. 1(a) is an explanatory diagram of an actual line in a real image in Embodiment 1 of the present invention; FIG. 1(b) illustrates the distribution of brightness gradient vectors of the actual line of (a); FIG. 2(a) illustrates the ideal distribution of brightness gradient vectors for a line in Embodiment 1; and FIG. 2(b) illustrates the brightness gradient vectors in the line region of (a).

First, the line extraction of Embodiment 1 obtains a vector concentration line by using narrow fixed neighborhood regions around a search line and computing the degree of concentration with respect to that search line. The principle of this line extraction is described below.

When line information is extracted from an image such as that shown in FIG. 1(a), the presence of a line is normally recognized along the ridge line of a linear convex region appearing in the image. If the point of interest P is moved within the image of FIG. 1(a) and the ridge line can be traced, a line segment has been extracted from the image. The right-hand diagram of FIG. 2(a) shows the brightness and the brightness gradient vectors of such a linear convex region, i.e., the vector field of an ideal linear convex region appearing in an image. In the ideal state, a uniform density change is formed from near the edges of the linear convex region toward its center, and the brightness gradient vectors all point perpendicularly toward the central ridge line. In the present invention, therefore, when the brightness gradient vectors point orthogonally toward a ridge line from both of its sides, that ridge line is judged to be the ridge line of a line segment (hereinafter referred to as the vector concentration line V).

The left-hand diagram of FIG. 2(a) shows the ideal model distribution of brightness gradient vectors for judging a linear convex region. When the point of interest P(x, y) lies on the vector concentration line V, the brightness gradient vectors concentrate perpendicularly toward V in each of the neighborhood region r1 on one side of V and the neighborhood region r2 on the other side. Such ideal neighborhood regions rarely appear in a real image, however; in practice they appear as in FIG. 1(b): the brightness gradient vectors in r1 and r2 point in roughly opposing directions, but concentrate toward the vector concentration line V only loosely. Furthermore, when the point of interest P(x, y) is not on the vector concentration line V, the brightness gradient vectors point in entirely different directions, further perturbed by irregularities such as noise.

In the present invention, therefore, to obtain the vector concentration line V, a hypothetical vector concentration line passing through the point of interest P(x, y) (hereinafter, the search line S) is assumed, and an evaluation value for P(x, y), namely the line concentration degree CN(x, y) described below, is computed over all neighboring points Q(x, y) in the neighborhood regions r1 and r2 of this search line S. When the line concentration degree CN(x, y) reaches the predetermined value indicating the vector concentration line V, that is, 1 or its maximum, that direction is taken as the direction of the line. The line concentration degree CN(x, y) based on these narrow fixed neighborhood regions of the present invention is called the narrow width fixed line convergence degree (hereinafter, NF line concentration degree).

As shown in FIG. 2(b), let φ be the direction of the vector concentration line V and of the search line, ψ the orientation of the ideal brightness gradient vector, and θ the orientation of the actual brightness gradient vector. The ideal orientation ψ at a neighboring point Q1 belonging to r1 or Q2 belonging to r2 is orthogonal to the vector concentration line V: ψ = φ + π/2 in r1 and ψ = φ − π/2 in r2. The actual brightness gradient vector at Q(x, y), however, is perturbed by irregularities in the vector flow and is inclined by the angle (θ − ψ) from the ideal orientation ψ. Accordingly, each brightness gradient vector is evaluated against the above model using (θ − ψ) as the evaluation index and the concentration degree c(θ − ψ) as the evaluation value. Note that θ and ψ are functions of x and y. With t a value in radians, the function c(t) representing the degree of concentration need only satisfy c(t) = c(−t), c(t + π) = −c(t), c(0) = 1, c(π/2) = 0, and be monotonically decreasing on 0 ≤ t ≤ π. In Embodiment 1, c(θ − ψ) = cos(θ − ψ) is adopted; as the above conditions show, however, c(θ − ψ) is not limited to cos(θ − ψ).
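The conditions on the concentration function c(t) above can be checked numerically. The following sketch (illustrative only, not part of the claimed program) verifies them for the choice c(t) = cos t adopted in Embodiment 1:

```python
import math

def c(t):
    # Concentration measure adopted in Embodiment 1: c(t) = cos(t).
    return math.cos(t)

# Required properties of a concentration function:
assert abs(c(0.3) - c(-0.3)) < 1e-12            # c(t) = c(-t)
assert abs(c(0.3 + math.pi) + c(0.3)) < 1e-12   # c(t + pi) = -c(t)
assert abs(c(0.0) - 1.0) < 1e-12                # c(0) = 1
assert abs(c(math.pi / 2)) < 1e-12              # c(pi/2) = 0
# Monotonically decreasing on [0, pi]:
samples = [c(math.pi * k / 100) for k in range(101)]
assert all(a >= b for a, b in zip(samples, samples[1:]))
```

Any other function passing these checks, such as an odd power of the cosine, could be substituted.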

When the true vector concentration line V points in the line-segment direction φ, the orientation θ of the brightness gradient vector at every Q(x, y) within the neighborhood regions r1 and r2 on both wings of the search line S of direction φ through P(x, y) satisfies θ = ψ; the concentration degree c(θ − ψ) at each point is then maximal, with c(0) = 1 on the vector concentration line V. Conversely, when c(θ − ψ) attains its maximum within r1 and r2, that direction φ is the direction of the vector concentration line V, and the brightness gradient vectors at the neighboring points Q(x, y) in r1 and r2 are then orthogonal to the direction φ. When c(θ − ψ) attains the maximum value 1 everywhere in r1 and r2, the point of interest P(x, y) lies on the vector concentration line V and V points in the direction φ. In practice the value need not be exactly 1; if it is close to 1, the line points approximately in the direction φ.

Now let Q(xd, yd) be a neighboring point of the hypothetical point of interest P(x, y), w the offset between the vector concentration line V and the search line S, and φ the direction of V and S, and assume narrow fixed neighborhood regions r1 and r2, each of width wmax and length l, in the vicinity of P(x, y); the line concentration degree CN(x, y) with respect to the search line S is then computed. With A(r) the total area of the narrow fixed neighborhood regions r1 and r2, the line concentration degree CN(x, y) is expressed by (Equation 1). FIG. 3 is an explanatory diagram of line concentration in the narrow fixed neighborhood regions of Embodiment 1. W is the maximum search width of the search region, with W ≥ 2wmax. The width wmax should, however, be chosen sufficiently smaller than W so that W has no influence; for real images wmax = 2 to 5 is sufficient.

CN(x, y) = (1/A(r)) [ Σ_{Q∈r1} cos(θ − φ − π/2) + Σ_{Q∈r2} cos(θ − φ + π/2) ]  …(Equation 1)
According to FIG. 3, the concentration degree c(θ − ψ) of the brightness gradient vectors in the neighborhood region r2 is 1; in r1, c(θ − ψ) is −1 in the strip between the vector concentration line V and the search line S, and 1 in the remainder of r1. Hence CN(x, y) = {(wmax − w) × 1 + (−1) × w + 1 × wmax} / 2wmax = 1 − w/wmax. When the vector concentration line V and the search line S coincide, the point of interest P(x, y) lies on V and CN(x, y) = 1.
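As a worked check of the relation CN(x, y) = 1 − w/wmax, the following sketch evaluates it under the ideal vector field of FIG. 3 (illustrative only; the function name is not from the patent):

```python
def cn_offset(w, wmax):
    # Per-unit-length line concentration when the search line S is offset
    # by w (0 <= w <= wmax) from the true vector concentration line V:
    # the strip of width w in r1 between V and S contributes -1 per pixel,
    # the remaining (wmax - w) of r1 contributes +1, and all of r2 (+1).
    return ((wmax - w) * 1 + (-1) * w + 1 * wmax) / (2 * wmax)

assert cn_offset(0, 4) == 1.0        # S coincides with V: C_N = 1
assert cn_offset(2, 4) == 1 - 2 / 4  # C_N = 1 - w/wmax
```

The concentration thus falls off linearly as the search line is displaced from the true ridge.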

In the image processing of Embodiment 1, then, the hypothetical point of interest P(x, y) is moved successively over the image to be processed, the narrow fixed neighborhood regions r1 and r2 are assumed at each position, and (Equation 1) is computed; a search line S for which CN(x, y) = 1 is the vector concentration line V, that is, a ridge on the image.

Since the term cos(θ − φ + π/2) in the second term of (Equation 1) can be rewritten as −cos(θ − φ − π/2), both terms use the identical expression cos(θ − φ − π/2), and CN(x, y) is obtained by adding or subtracting the same computed value cos(θ − φ − π/2)/A(r) within the neighborhood regions r1 and r2. The present invention exploits this property: the direction φ and the brightness-gradient orientation θ are each divided into n discrete values, and the resulting computed values cos(θ − φ − π/2)/A(r) are stored in advance as basic addition values. Accordingly, in a first stage the basic addition values of (Equation 1) are computed; in a second stage the orientation θ of the brightness gradient vector within the narrow fixed neighborhood regions r1 and r2 is measured, and from the combination of θ and φ the corresponding basic addition value is retrieved from the stored candidates and added or subtracted, thereby achieving high speed.
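A minimal sketch of this two-stage idea follows (the table layout and the division number n = 16 are illustrative assumptions, not from the patent): stage 1 precomputes cos(θ − φ − π/2)/A(r) for every discretized pair, and stage 2 replaces each trigonometric evaluation by a table lookup plus an addition or subtraction:

```python
import math

n = 16      # assumed division number for both theta and phi
A_r = 1.0   # placeholder for the total neighborhood area A(r)

# Stage 1: precompute the basic addition values for all (t, f) pairs,
# where theta = (2*pi/n)*t and phi = (2*pi/n)*f.
table = [[math.cos(2 * math.pi * (t - f) / n - math.pi / 2) / A_r
          for f in range(n)] for t in range(n)]

def basic_value(theta, phi):
    # Stage 2: discretize the measured angles and look the value up.
    t = int(round(theta * n / (2 * math.pi))) % n
    f = int(round(phi * n / (2 * math.pi))) % n
    return table[t][f]

# A gradient orthogonal to the search line (theta = phi + pi/2) yields
# the maximal entry 1/A(r); the r2 term is simply the negated entry.
assert abs(basic_value(math.pi / 2, 0.0) - 1.0 / A_r) < 1e-9
```

At run time no cosine is evaluated; every contribution to CN(x, y) is one indexed read of this table.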

In contrast, the conventional FA line concentration method must compute the range of neighboring points (i, j) from the length l of the neighborhood regions r1 and r2 and the maximum search width W, measure the line concentration of these neighboring points with respect to the search line S, and in addition estimate the widths Wr and Wl, requiring an enormous amount of computation. In the present invention this reduces to additions and subtractions of known computed values, cutting the amount of computation drastically.

Furthermore, the present invention can exclude edge portions indicating surfaces, as opposed to linear convex regions indicating lines, so that only linear convex regions are extracted. For the linear convex region shown in FIG. 3, the concentration degree c(θ − ψ) of the brightness gradient vectors is 0 for w > W. If the field is instead a vector field indicating a surface, c(θ − ψ) is 1 on the right half-plane of the vector concentration line V for w > W. In that case CN(x, y) = {(wmax − W) × 1 + (−1) × W} / 2wmax = 1/2 − W/wmax, and CN(x, y) does not exceed 0.5. Accordingly, by setting a threshold of 0.5 on the line concentration degree CN(x, y) and extracting CN(x, y) > 0.5, such edge portions can be excluded from the extracted data.

The image-processing procedure for line extraction according to Embodiment 1 of the present invention is now described with reference to the flowcharts of FIGS. 4 and 5. FIG. 4 is a flowchart of the basic-addition-value computation of the image processing for line extraction in Embodiment 1; FIG. 5 is a flowchart of the line-concentration-degree computation; FIG. 6 illustrates the pixel arrangement of the narrow fixed neighborhood regions in Embodiment 1; FIG. 7 is a flowchart of the line-extraction processing; FIG. 8 is a block diagram of the line concentration filter of Embodiment 1; and FIG. 9 shows the distributions of the line concentration degree obtained by the FA method, by the narrow fixed neighborhood regions of Embodiment 1, and by the separated and divided narrow fixed neighborhood regions of Embodiment 2. The image-processing procedure of Embodiment 1 consists of (1) a basic addition value calculation step, (2) a line concentration degree calculation step, and (3) a line extraction step.

First, (1) the basic addition value calculation step is described. Narrow fixed neighborhood regions r1 and r2, each of width wmax and length l, are assumed on both wings of the search line S passing through a point P(x, y) of the image, and the division number n for discretizing the direction φ of the search line S and the orientation of the brightness gradient vector is set (step 1). Local coordinates with this first P(x, y) as origin are prepared, and the positions of the pixels constituting the neighborhood regions r1 and r2 of width wmax and length l are set (step 2). The numbers of pixels contained in r1 and r2 are counted to obtain M1 and M2, respectively (step 3).

Next, serial numbers m are assigned to the pixels of the neighborhood regions r1 and r2 as shown in FIG. 6, and the local position coordinates (d1x(m), d1y(m)) representing the neighboring points Q(xd, yd) in r1 as functions of m, and the position coordinates (d2x(m), d2y(m)) representing the neighboring points Q in r2 as functions of m, are computed (step 4). Then, setting t = (n/2π)θ, φ is discretized into n values and (Equation 2) and (Equation 3) are evaluated over r1 and r2 (step 5); that is, in the neighborhood region r1, (Equation 2) with m = 0 to M1 and t = 0 to (n − 1). Note that φ is discretized in increments of Δφ.

C1(t, m) = cos((2π/n)t − φ − π/2) / A(r), m = 0 to M1, t = 0 to (n − 1)  …(Equation 2)

Similarly, in the neighborhood region r2, (Equation 3) with m = 0 to M2 and t = 0 to (n − 1):

C2(t, m) = cos((2π/n)t − φ + π/2) / A(r), m = 0 to M2, t = 0 to (n − 1)  …(Equation 3)

Next, the serial numbers of the pixels of the neighborhood regions r1 and r2 are joined (step 6). That is, in r1, C(t, m) = C1(t, m), dx(m) = d1x(m), dy(m) = d1y(m), with m = 0 to M1 and t = 0 to (n − 1); and in r2, C(t, m + M1) = C2(t, m), dx(m + M1) = d2x(m), dy(m + M1) = d2y(m), with m = 0 to M2 and t = 0 to (n − 1).

In this way, by presetting m (m = 0 to M) specifying the position of each pixel, the total pixel count M = M1 + M2, and the division number n discretizing the direction φ of the search line S and the orientation of the brightness gradient vector, the basic addition value calculation step of Embodiment 1 expresses the candidate basic addition values C1(t, m) and C2(t, m), the per-pixel concentration degrees used in the computation of the (2) line concentration degree calculation step, as a single basic addition value C(t, m), with m = 0 to M and t = 0 to (n − 1), which can be computed in advance and stored in the storage means. Similarly, dx(m) and dy(m) are specified by m = 0 to M. The parameter φ is also varied over φ = 0 to (n − 1), and the basic addition value C(t, m) is computed for each φ.
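The layout produced by steps 1 to 6 can be sketched as follows. The pixel enumeration and the rounding to integer offsets are illustrative assumptions (the patent leaves these details to the implementation); only the joined tables C(t, m), dx(m), dy(m) follow the text:

```python
import math

def build_basic_tables(n, wmax, l, f):
    """Build the joined tables C(t, m), dx(m), dy(m) for one discretized
    search direction phi = (2*pi/n)*f, with r1 on one wing of the search
    line S and r2 on the other (steps 1-6)."""
    phi = 2 * math.pi * f / n
    ux, uy = math.cos(phi), math.sin(phi)        # unit vector along S
    vx, vy = -math.sin(phi), math.cos(phi)       # unit normal to S
    dx, dy, side = [], [], []
    for s in (+1, -1):                           # +1 -> r1, -1 -> r2
        for a in range(-(l // 2), l // 2 + 1):   # position along S
            for b in range(1, wmax + 1):         # offset into the wing
                dx.append(round(a * ux + s * b * vx))
                dy.append(round(a * uy + s * b * vy))
                side.append(s)
    A_r = len(dx)                                # total pixel count M
    # In r1 the ideal orientation is psi = phi + pi/2 and in r2 it is
    # psi = phi - pi/2, so the entry for gradient bin t in a pixel of
    # side s is cos((2*pi/n)*t - phi - s*pi/2) / A(r).
    C = [[math.cos(2 * math.pi * t / n - phi - s * math.pi / 2) / A_r
          for s in side] for t in range(n)]
    return C, dx, dy

C, dx, dy = build_basic_tables(n=16, wmax=3, l=7, f=0)
assert len(dx) == len(dy) == len(C[0]) == 2 * 3 * 7   # M = M1 + M2
```

These tables depend only on n, wmax, l, and φ, so they are built once and shared by every measurement point in the image.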

The concentration degree c(x), with the y-axis taken in the φ direction, need only satisfy c(x) = c(−x), c(x + π) = −c(x), c(0) = 1, c(π/2) = 0, and be monotonically decreasing on 0 ≤ x ≤ π; in place of cos(θ − φ − π/2) and cos(θ − φ + π/2) in (Equation 1), any other functions f(θ − φ − π/2) and f(θ − φ + π/2) satisfying these conditions may be used. One such function is a power of the cosine, cos^k(θ − φ − π/2) with k odd. Since then f(θ − φ + π/2) = −f(θ − φ − π/2), the computation of step 5 becomes C1(t, m) = f(t − φ, d1x(m), d1y(m)), m = 0 to M1, t = 0 to (n − 1) in the neighborhood region r1, and C2(t, m) = f(t − φ, d2x(m), d2y(m)), m = 0 to M2, t = 0 to (n − 1) in r2. A function such as cos^k(θ − φ − π/2) falls off sharply around θ = φ + π/2, yielding a highly sensitive line filter.

Next, (2) the line concentration degree calculation step, executed when computing the line concentration degree of an actual image, is described. Since the candidate basic addition values of the per-pixel concentration degrees used in this step have been computed beforehand in (1) the basic addition value calculation step, once the division number n for the line direction φ and the gradient orientation θ, and the brightness I(x, y) of the original image, are input, the line concentration degree CN(x, y) is obtained by (Equation 1) as the sum of the basic addition values C(t, m) over the pixels.

In FIG. 5, φ (in other words, the division number n) and the brightness I(x, y) are input (step 11), and in preparation for computing the position of the point of interest P(x, y) in the image, the position coordinates of the neighboring points Q(xd, yd) and the line concentration degree CN(x, y) are initialized (step 12). The search-line direction φ and the coordinates (dx(m), dy(m)) of the neighborhood regions r1 and r2 can thereby be expressed by n and m (m = 0 to M).

Next, the orientation θ of the brightness gradient vector at the neighboring point Q is obtained (step 13), and t = (n/2π)θ is computed and rounded to an integer (step 14). Whether φ has been input is then checked (step 15); if φ is not specified, φ is initialized to φ = 0 (step 16), and if φ is specified, φ = φ0 is set (step 17). m is initialized to m = 0 (step 18). Then x = xd + dx(m) and y = yd + dy(m) are computed to obtain the coordinates of the point of interest P(x, y) (step 19). One value is selected from the candidate basic addition values C(t, m), and CN(x, y) = CN(x, y) + C(t, m) is computed (step 20) to accumulate the line concentration degree CN(x, y) at this position.

Next, m = m + 1 is computed (step 21) and whether m equals M is judged (step 22); if m does not equal M, the process returns to step 19 and the coordinate computation is repeated. When m equals M in step 22, whether φ is specified is checked again (step 23); if φ is not specified, φ = φ + 1 is computed (step 24) and whether φ equals (n − 1) is judged, while if φ is specified as φ = φ0, whether it equals (φ0 + 1) is judged (step 25); if neither holds, the process returns to step 18 and the computation is repeated. When φ = φ0 was specified in step 23, or when φ equals (n − 1) in step 25, whether the neighboring point Q(xd, yd) must be moved is judged (step 26); if no movement is needed, the measurement of Q(xd, yd) ends, and if movement is needed, the process moves to a new Q(xd, yd), returns to step 14, and repeats.
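Steps 13 to 22 amount to scattering precomputed table entries over the image. The following self-contained sketch uses a deliberately tiny hand-built table for one horizontal search direction (all names and the toy gradient field are illustrative assumptions):

```python
import math

def line_concentration(theta_map, C, dx, dy, n):
    """For one search direction, accumulate C_N(x, y): each pixel Q whose
    gradient orientation falls in bin t adds the basic addition value
    C(t, m) to the measurement point P = Q + (dx(m), dy(m)) (steps 13-22).
    theta_map[y][x] holds the gradient orientation (radians) at Q."""
    H, W = len(theta_map), len(theta_map[0])
    CN = [[0.0] * W for _ in range(H)]
    for yd in range(H):
        for xd in range(W):
            t = int(round(theta_map[yd][xd] * n / (2 * math.pi))) % n
            for m in range(len(dx)):
                x, y = xd + dx[m], yd + dy[m]
                if 0 <= x < W and 0 <= y < H:
                    CN[y][x] += C[t][m]
    return CN

# Tiny table for phi = 0 with one pixel in r1 and one in r2 (A(r) = 2):
n = 4
C = [[math.cos(2 * math.pi * t / n - math.pi / 2) / 2,
      -math.cos(2 * math.pi * t / n - math.pi / 2) / 2] for t in range(n)]
dx, dy = [0, 0], [1, -1]

# A horizontal ridge through row 1: gradients above point toward it
# (theta = pi/2) and gradients below point back toward it (theta = 3*pi/2).
theta_map = [[math.pi / 2] * 3, [0.0] * 3, [3 * math.pi / 2] * 3]
CN = line_concentration(theta_map, C, dx, dy, n)
assert abs(CN[1][1] - 1.0) < 1e-9   # maximal concentration on the ridge
```

On this toy field the ridge row reaches CN = 1, exactly the condition the text uses to declare a vector concentration line.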

Describing further (3) the line extraction step: in this step, a point is judged to belong to a linear convex region when its line concentration degree CN(x, y) exceeds the noise tolerance value 0.5, and the portions of the current image that can be evaluated as lines according to the value of CN(x, y) are extracted and output as line information. In the following line extraction step, enhancement processing is additionally performed to output the line image more clearly. In the line extraction processing, when the line concentration degrees CN(x, y) are input as shown in FIG. 7 (step 31), a coordinate system with the point of interest P(x, y) of the image as origin is selected and the position coordinates of the neighboring points Q(xd, yd) are initialized (step 32). The maximum of the input line concentration degrees CN(x, y) over φ is found, and whether this maximum satisfies CN(x, y) > 0.5 is judged (step 33); if CN(x, y) > 0.5, CL(x, y) = 1 is output (step 34). If φ was not specified, the φ giving the maximum is simultaneously output as the direction of the line. If CN(x, y) ≤ 0.5 in step 33, CL(x, y) = 0 is output (step 35), and whether the neighboring point Q(xd, yd) must be moved is judged (step 36); if no movement is needed, the measurement of Q(xd, yd) ends, and if movement is needed, the process returns to step 33 for a new Q(xd, yd) and repeats.

The noise tolerance value 0.5 described above is based on the fact, noted earlier, that CN(x, y) does not exceed 0.5 for lines indicated by edge portions of surface regions. Data indicating such edge portions can be removed from the line-image output by the processing of (3) the line extraction step. Even in the presence of other noise, Embodiment 1 detects CN(x, y) only in the very narrow, small regions of ridge lines, so the influence of noise is largely eliminated and only the line image is output reliably.
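The extraction step thus reduces to a threshold at the noise tolerance value; a minimal sketch (the function name is illustrative):

```python
def extract_lines(CN, noise_tolerance=0.5):
    # Output C_L(x, y) = 1 where the line concentration exceeds the noise
    # tolerance 0.5, else 0; surface-edge responses never exceed 0.5 and
    # are therefore rejected together with other noise.
    return [[1 if v > noise_tolerance else 0 for v in row] for row in CN]

assert extract_lines([[0.9, 0.5, 0.2], [0.51, 0.3, 1.0]]) == [[1, 0, 0], [1, 0, 1]]
```

Note that a value of exactly 0.5 is rejected, matching the strict inequality CN(x, y) > 0.5 in step 33.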

Thus, in Embodiment 1 of the present invention, (1) the basic addition value calculation step computes in advance, for each φ, m, and t, the per-pixel basic addition values C(t, m) used in the subsequent (2) line concentration calculation step; (2) the line concentration calculation step computes t by obtaining the orientation θ of the brightness gradient vector at each neighboring point Q and uses the basic addition values C(t, m) to calculate the line concentration degree CN(x, y) for the known φ0 or for each φ; and (3) the line extraction step obtains the CN(x, y) that can be evaluated as linear convex regions from this set. Unlike the conventional GAWHT method, no enormous computation advancing the line concentration degree CN(x, y) and the line widths Wr and Wl simultaneously is required; by merely selecting the basic addition values C(t, m) obtained in the preliminary computation and repeating simple additions, the linear convex regions in an image, in particular their ridge lines alone, can be extracted in an extremely short time.

Further, when the object space formed by (x, y, φ) is sliced at a specific φ0, the information for that φ0 direction is extracted, enabling more detailed analysis. For example, by specifying φ = φA or φB, obtaining CN(x, y, φA) and CN(x, y, φB), and extracting the cases where each exceeds 0.5, line information oriented in the φA direction can be separated from line information oriented in the φB direction. In the space sliced at φA and the space sliced at φB, information that drops out of the line information taking the maximum CN(x, y, φ) when φ is unspecified becomes extractable in the respective spaces.

For example, when two lines in the φ_A and φ_B directions cross, at the crossing the line of the object on the camera side is extracted in one slice space, and the line of the object underneath is extracted in the other. In many cases the lower line is extracted with a gap at the crossing. Since the outline of this gap is not sharp, the three-dimensional structure can be clarified further by, for example, shrinking the output image to reduce the information content of the blurred portion and then varying φ again to output a line image in the C_N(x, y, φ) space.

Furthermore, besides binary extraction of the line information with C_N(x, y, φ) greater than 0.5, multi-valued images can be extracted with bands such as 0.5 to 0.6, 0.6 to 0.7, 0.7 to 0.8, and 0.8 and above; adding gradation to the image in this way makes it still easier to read.
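The banded multi-valued output can be sketched with `numpy.digitize`; the C_N values below are invented for illustration.

```python
import numpy as np

# hypothetical C_N map; values below 0.5 are treated as background (level 0)
cn = np.array([[0.30, 0.55, 0.65],
               [0.72, 0.85, 0.95],
               [0.10, 0.50, 0.61]])

# the bands from the text: 0.5-0.6, 0.6-0.7, 0.7-0.8, and 0.8 and above
levels = np.digitize(cn, bins=[0.5, 0.6, 0.7, 0.8], right=False)
print(levels)   # gradation levels 0..4 per pixel
```

Mapping the levels to display intensities then yields the gradation described in the text.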

Thus the image processing for line extraction of Embodiment 1 of the present invention enables three-dimensional analysis of an image not only by varying the direction φ but also by processing with φ held fixed. Outputting the result as the (x, y, φ) object space then makes diverse image analyses possible.

Next, the block configuration of the program and line concentration filter of Embodiment 1, which execute the image processing described above, is explained. In FIG. 8, reference numeral 1 denotes basic addition value calculation means that executes the procedure of the (1) basic addition value calculation step, 2 denotes line concentration degree calculation means that executes the procedure of the (2) line concentration degree calculation step, and 3 denotes line extraction means that executes the procedure of the (3) line extraction step.

The basic addition value calculation means 1 counts the pixels making up the neighboring regions r_1 and r_2, computes the positions of the neighboring points Q within r_1 and r_2, discretizes θ and φ, and calculates the basic addition values (Equation 2), (Equation 3) for every combination of θ and φ. The basic addition values of r_1 and r_2 exploit the property that they are identical in the two regions whenever (θ − φ) is the same; by concatenating the serial pixel numbers of r_1 and r_2, the computation can be carried out with a single running index. The line concentration degree calculation means 2 computes the direction θ of the luminance gradient vector; when φ is known, it simply sums the basic addition values given by θ and φ over all neighboring points Q to obtain C_N(x, y). When φ is undetermined, it varies the discretized φ, sums the basic addition values for each φ, and takes the maximum as C_N(x, y); this yields both φ and C_N(x, y). Finally, the line extraction means 3 outputs only the values of C_N(x, y) exceeding the noise tolerance value 0.5, setting C_L(x, y) = 1 when C_N(x, y) > 0.5 and C_L(x, y) = 0 when C_N(x, y) ≤ 0.5, which sharpens the line image. This is not limited to binary output; multi-valued output is also possible.
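When φ is undetermined, the behavior of the calculation means amounts to a maximization over the discretized directions followed by thresholding; a minimal sketch with fabricated per-φ sums:

```python
import numpy as np

N_PHI = 8   # number of discretized search-line directions (assumed)

# hypothetical per-φ sums of basic addition values for one measurement point
sums_per_phi = np.array([0.12, 0.25, 0.81, 0.33, 0.10, 0.05, 0.41, 0.22])

phi_idx = int(np.argmax(sums_per_phi))   # direction maximizing concentration
c_n = float(sums_per_phi[phi_idx])       # line concentration degree C_N(x, y)
c_l = 1 if c_n > 0.5 else 0              # noise tolerance value 0.5
print(phi_idx, c_n, c_l)                 # 2 0.81 1
```

The maximizing index plays the role of the recovered line direction φ, and the comparison against 0.5 is the binarization performed by the line extraction means.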

The program of Embodiment 1 is loaded into the central processing unit (CPU) of a computer (not shown), and the calculation steps of the (1) basic addition value calculation step, (2) line concentration degree calculation step, and (3) line extraction step are executed according to their procedures. The line concentration image filter of Embodiment 1 results from installing this program on a computer and processing according to these procedures; it is configured as function-realizing means by the computer and the program.

Here the image processing for line extraction with the fixed narrow neighboring regions described above is compared with other methods. FIG. 9(a) shows the conventional FA line concentration degree, and the NF line concentration degree shown in (b) is that of the fixed narrow neighboring regions of Embodiment 1. The FA line concentration degree is 1 at the ridge line and 0.5 at the line-width boundary. Because it computes the concentration from the directions of the vectors rather than from luminance derivatives, it is little affected by contrast; but because it computes the concentration while estimating the line width over a wide region, it is easily affected by noise. By contrast, with the fixed narrow neighboring regions of Embodiment 1 the concentration is detected only in a very narrow, small region of about unit width, and moreover only where C_N(x, y) exceeds 0.5, so it is almost unaffected by noise and, as can be seen, extracts only the ridge line sharply.

Next, the results of actually applying the image processing of Embodiment 1 and other methods to a medical image are described. FIG. 10 shows photographs in which the mass shadow of a breast cancer is analyzed with the line concentration degree image filter of Embodiment 1 of the present invention and with other filters. FIG. 10(a) is the original X-ray photograph; (b) is the binary image processed with the line concentration image filter of Embodiment 1; (c) is the multi-valued image processed with the line concentration degree image filter of Embodiment 1; (d) is the binary image processed with the FA line concentration filter; (e) is the multi-valued image processed with the FA line concentration filter; and (f) is the binary image processed with a conventional differential filter.

In the original X-ray photograph of (a), the mass shadow appears blurred, and it is difficult for anyone but a specialist to read the information. Conventionally, this information was differentiated with a differential filter and judged from an image such as (f). In (f), the central white ring stands out amid heavy noise, and radial lines that appear to be blood vessels and spicules can be seen extending around it, although very indistinctly. The contrast brings in much extraneous information, and the white ring, which marks the boundary of the mass shadow produced by differentiation, makes the spicules hard to see.

By contrast, in the binary image of (b) processed with the line concentration degree image filter of Embodiment 1, only lines are detected; the ring-shaped portion marking the boundary has been removed, and the long, radially extending spicules and the many linear structures such as blood vessels are clearly extracted. In the multi-valued image of (c), an even sharper and more three-dimensional line structure of the mass shadow is extracted, making malignant and benign tumors easy to distinguish. Line extraction is important for diagnosing malignant tumors, and the influence of noise must be eliminated; the line concentration degree image filter of Embodiment 1 can be said to be well suited to this purpose. Next, in the binary image of (d) processed with the FA line concentration filter, the lines are extracted at their correct widths and the result is excellent, almost comparable to that of the filter of Embodiment 1, but cluttered, intersecting lines stand out. This is because the FA line concentration filter is more susceptible to noise than the line concentration degree image filter of Embodiment 1, and its computation amount and computation time are some two to four orders of magnitude greater. The filter of Embodiment 1 achieves this because it uses fixed narrow neighboring regions, discretizes φ and θ, exploits the properties of the cosine function that evaluates the degree of concentration, and succeeds in separating the computation of the gradient direction θ from the computation of the basic addition values; since the basic addition values are accumulated in advance, the number of measurements of θ of the luminance gradient vector is reduced to 1/M or less (where M is the total number of pixels).

Thus the line concentration degree image filter, the program, and the image processing method for line extraction of Embodiment 1 can extract the line information in an image without being affected by the image's contrast, line width, noise, and the like. Moreover, because Embodiment 1 can extract only the line information from an image, it is also possible to output, from a single image, both a line image with the line information extracted by Embodiment 1 and a line image obtained by ordinary differentiation, and to extract an image of only the boundary portions of surface regions by taking the difference between the two.
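Taking the difference of the two line images can be sketched as below; the two binary maps are made-up stand-ins for the output of a differential filter and the output of the Embodiment 1 filter on the same image.

```python
import numpy as np

# hypothetical binary maps obtained from the same image:
diff_lines = np.array([[1, 1, 0],
                       [0, 1, 0]])   # differential filter: lines AND region boundaries
conc_lines = np.array([[1, 0, 0],
                       [0, 1, 0]])   # concentration filter: lines only

# boundary portions of surface regions: responses of the differential filter
# that are not line-like (clip keeps the result binary)
boundaries = np.clip(diff_lines - conc_lines, 0, 1)
print(boundaries)   # rows: [0 1 0] and [0 0 0]
```

The surviving pixel is one the differential filter marked but the line filter did not, i.e. a region boundary rather than a line.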

The line extraction image processing method and program and the line concentration degree image filter of Embodiment 2 of the present invention are explained with reference to FIG. 11, which illustrates line concentration in the fixed narrow neighboring regions of Embodiment 2. The image processing of Embodiment 2 separates the neighboring regions r_1 and r_2 on the two flanks of the search line S, placing each region at a distance of width w_min from S and extending it outward. The configuration is otherwise fundamentally the same as Embodiment 1, so the detailed description is deferred to Embodiment 1 and only the differences are described. FIGS. 1, 2, and 4 to 10 are also referred to in Embodiment 2.

In Embodiment 2, as shown in FIG. 11, for the point of interest P(x, y) on the search line S, let Q(x_d, y_d) be a neighboring point, w the width to the vector concentration line V and the search line S, and φ the direction of the vector concentration line V and the search line S; assuming two separated fixed narrow neighboring regions r_1 and r_2, each of width w_max and length l, starting at positions a width w_min away on either side in the vicinity of P(x, y), the line concentration degree C_N(x, y) with respect to the search line S is calculated. Letting A_1(r) and A_2(r) denote the areas per unit length in the V direction of r_1 and r_2, we have A(r) = A_1(r) + A_2(r), so the line concentration degree C_N(x, y) can be expressed by (Equation 1). For real images it suffices to set w_min = 1 or so and w_max ≥ 2; w_max of about 2 to 5 is sufficient. In this case, the line extraction image processing of Embodiment 2 improves the noise immunity still further over Embodiment 1.
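Under the stated values (w_min = 1, w_max of about 2 to 5), the separated flank regions of Embodiment 2 cover only perpendicular offsets |d| in [w_min, w_max] from the search line; a tiny sketch with an assumed w_max = 3:

```python
# perpendicular distances d from the search line S covered by the two
# separated flank regions of Embodiment 2 (w_min = 1, w_max = 3 assumed)
w_min, w_max = 1, 3

r1 = [d for d in range(w_min, w_max + 1)]    # flank on one side of S
r2 = [-d for d in range(w_min, w_max + 1)]   # flank on the other side

print(r1, r2)   # the strip |d| < w_min around S itself is skipped
```

Skipping the strip |d| < w_min is what distinguishes Embodiment 2 from the contiguous regions of Embodiment 1.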

As in Embodiment 1, the image processing procedure for line extraction of Embodiment 2 consists of (1) a basic addition value calculation step, (2) a line concentration degree calculation step, and (3) a line extraction step, and is fundamentally the same as Embodiment 1; the only difference is that the coordinates (dx(m), dy(m)) change greatly at m = M_1.

As shown in FIG. 8, the program and line concentration degree image filter of Embodiment 2 likewise consist of (1) basic addition value calculation means 1 executing the procedure of the basic addition value calculation step, (2) line concentration degree calculation means 2 executing the procedure of the line concentration degree calculation step, and (3) line extraction means 3 executing the procedure of the line extraction step.

The basic addition value calculation means 1 counts the pixels making up the neighboring regions r_1 and r_2, computes the positions of the neighboring points Q within r_1 and r_2, discretizes θ and φ, and calculates the basic addition values (Equation 2), (Equation 3) for every combination of θ and φ. The line concentration degree calculation means 2 computes the direction θ of the luminance gradient vector; when φ is known, it simply sums the basic addition values given by θ and φ over all neighboring points Q in each of r_1 and r_2 to obtain C_N(x, y). When φ is undetermined, it varies the discretized φ, sums the basic addition values for each φ, and takes the maximum as C_N(x, y); this yields both φ and C_N(x, y). The line extraction means 3 then outputs only the values of C_N(x, y) exceeding the noise tolerance value 0.5, setting C_L(x, y) = 1 when C_N(x, y) > 0.5 and C_L(x, y) = 0 when C_N(x, y) ≤ 0.5, sharpening the line image.

The line concentration degree of the image processing for line extraction with the separated fixed narrow neighboring regions of Embodiment 2, computed in this way, is now described. The two-part NF line concentration degree shown in FIG. 9(c) is that of the fixed narrow neighboring regions of Embodiment 2. With Embodiment 2, noise has influence only in the very narrow, small region (w_max − w_min), and the portion exceeding 0.5 spans only the slight region from (w_min + w_max)/2 to w_min; the processing is therefore almost unaffected and, as can be seen, extracts only the ridge line sharply.
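As a quick arithmetic check of the bands quoted above (with the assumed values w_min = 1 and w_max = 3):

```python
# widths of the noise-sensitive bands of Embodiment 2, per the text
w_min, w_max = 1, 3                              # assumed example values

band_total = w_max - w_min                       # band with any noise influence
band_above_half = (w_min + w_max) / 2 - w_min    # band where C_N can exceed 0.5
print(band_total, band_above_half)               # 2 1.0
```

Both bands shrink as w_max approaches w_min, which is why the split regions of Embodiment 2 are even more noise-tolerant than the contiguous regions of Embodiment 1.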

In the computation of Embodiment 1, the accuracy drops at the ends of the neighboring regions r_1 and r_2 relative to other portions, but Embodiment 2 has the excellent property that the accuracy hardly degrades at the region ends. In addition, when lines in the image cross, the line concentration filter of Embodiment 2 can detect this with high sensitivity.

The present invention is applicable to image filters and image processing capable of extracting the line information in an image.

FIG. 1 (a) explanatory diagram of an actual line in a real image in Embodiment 1 of the present invention; (b) diagram of the luminance vector distribution of the actual line in (a)
FIG. 2 (a) explanatory diagram of the ideal luminance vector distribution for a line in Embodiment 1 of the present invention; (b) explanatory diagram of the luminance vectors of the line region in (a)
FIG. 3 explanatory diagram of line concentration in the fixed narrow neighboring regions of Embodiment 1 of the present invention
FIG. 4 flowchart of the basic addition value calculation of the image processing for line extraction in Embodiment 1 of the present invention
FIG. 5 flowchart of the line concentration degree calculation of the image processing for line extraction in Embodiment 1 of the present invention
FIG. 6 explanatory diagram of the pixel arrangement of the fixed narrow neighboring regions in Embodiment 1 of the present invention
FIG. 7 flowchart of the line extraction processing of the image processing for line extraction in Embodiment 1 of the present invention
FIG. 8 block diagram of the line concentration filter in Embodiment 1 of the present invention
FIG. 9 distribution of line concentration degrees for the FA method, the fixed narrow neighboring regions of Embodiment 1, and the separated fixed narrow neighboring regions of Embodiment 2
FIG. 10 photographs analyzing the mass shadow of a breast cancer with the line concentration degree image filter of Embodiment 1 of the present invention and other filters
FIG. 11 explanatory diagram of line concentration in the fixed narrow neighboring regions in Embodiment 2 of the present invention
FIG. 12 (a) explanatory diagram of the definition of the line concentration degree; (b) explanatory diagram of the line concentration degree in an infinite-length line concentration vector field
FIG. 13 flowchart of the line concentration degree calculation of image processing by the FA concentration degree based on the GAWHT method

Explanation of symbols

1  basic addition value calculation means
2  line concentration degree calculation means
3  line extraction means
P(x, y)  point of interest
Q(x_d, y_d)  neighboring point
S  search line
V  concentration line
θ, φ  directions
r_1, r_2, r  neighboring regions
w, w_max, w_min  widths
W  maximum search width
C_N(x, y)  line concentration degree
c(θ − ψ)  degree of concentration

Claims (9)

1. An image processing method for line extraction in which, for a plurality of measurement points on an image, measurement neighboring regions having a fixed shape anchored to the two flanks of a search line passing through each measurement point are prepared; the direction of the luminance gradient vector of the image is measured at a plurality of neighboring points contained in the neighboring regions; a degree of concentration evaluating the difference between the direction of the luminance gradient vector and the direction of the search line is computed at each of the neighboring points; a line concentration degree for the measurement point is computed from all the degrees of concentration in the neighboring regions; and when the line concentration degree reaches a maximum, it is determined that there is line information along the search line, the method characterized in that:
the direction of the luminance gradient vector and the direction of the search line are each discretized, and basic addition values of the degree of concentration of the neighboring points, shareable by every measurement point, are computed in advance for each search line direction on local coordinates representing the neighboring region, based on the direction of the luminance gradient vector and the positions and number of pixels in the neighboring region; and
when the direction of the luminance gradient vector is measured, the line concentration degree of each measurement point is computed on the basis of the basic addition values by selecting one of the basic addition value candidates for each search line direction and summing within the neighboring region.
2. The image processing method for line extraction according to claim 1, wherein the neighboring regions are fixed rectangular regions, long and of a predetermined width, extending along the two flanks of the search line.
3. The image processing method for line extraction according to claim 1 or 2, wherein the neighboring region consists of two regions separated on the respective flanks of the search line.
4. The image processing method for line extraction according to any one of claims 1 to 3, wherein the degree of concentration c(t) is computed by a function satisfying c(t) = c(−t), c(t + π) = −c(t), c(0) = 1, and c(π/2) = 0, and monotonically decreasing for 0 ≤ t ≤ π.
5. The image processing method for line extraction according to any one of claims 1 to 4, wherein it is determined that there is line information when the line concentration degree is greater than 0.5.
6. A program for causing a computer to function as:
(1) basic addition value calculation means that, for a plurality of measurement points on an image, provides fixed-shape measurement neighboring regions on the two flanks of a search line passing through each measurement point, discretizes the direction of the luminance gradient vector at the plurality of neighboring points contained in the neighboring regions and the direction of the search line, and computes and stores, for each search line direction on local coordinates representing the neighboring region, basic addition values of the degree of concentration of the neighboring points shareable by every measurement point, based on the direction of the luminance gradient vector and the positions and number of pixels in the neighboring region;
(2) line concentration degree calculation means that, when the direction of the search line is set, measures the direction of the luminance gradient vector of the image at each of the neighboring points of each measurement point, retrieves the basic addition value corresponding to that direction from among the basic addition value candidates, computes the line concentration degree on the basis of the basic addition values by summing within the neighboring region, and outputs, among the line concentration degrees exhibiting maxima at the respective measurement points, the line concentration degree that can be judged to be line information and/or the direction of the search line at that time; and
(3) line extraction means that judges line information to be present when the line concentration degree computed by the line concentration degree calculation means is greater than a predetermined noise tolerance value.
7. The program according to claim 6, wherein the noise tolerance value is 0.5.
8. The program according to claim 6 or 7, wherein the basic addition value calculation means sets a rectangular region as the neighboring region.
9. A line concentration degree image filter comprising:
(1) basic addition value calculation means that, for a plurality of measurement points on an image, provides fixed-shape measurement neighboring regions on the two flanks of a search line passing through each measurement point, discretizes the direction of the luminance gradient vector at the plurality of neighboring points contained in the neighboring regions and the direction of the search line, and computes and stores, for each search line direction on local coordinates representing the neighboring region, basic addition values of the degree of concentration of the neighboring points shareable by every measurement point, based on the direction of the luminance gradient vector and the positions and number of pixels in the neighboring region;
(2) line concentration degree calculation means that, when the direction of the search line is set, measures the direction of the luminance gradient vector of the image at each of the neighboring points of each measurement point, retrieves the basic addition value corresponding to that direction from among the basic addition value candidates, computes the line concentration degree on the basis of the basic addition values by summing within the neighboring region, and outputs, among the line concentration degrees exhibiting maxima at the respective measurement points, the line concentration degree that can be judged to be line information and/or the direction of the search line at that time; and
(3) line extraction means that judges line information to be present when the line concentration degree computed by the line concentration degree calculation means is greater than a predetermined noise tolerance value.
JP2004097420A 2004-03-30 2004-03-30 Image processing method and program for line extraction, line concentration image filter Expired - Fee Related JP4544891B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004097420A JP4544891B2 (en) 2004-03-30 2004-03-30 Image processing method and program for line extraction, line concentration image filter


Publications (2)

Publication Number Publication Date
JP2005284697A JP2005284697A (en) 2005-10-13
JP4544891B2 true JP4544891B2 (en) 2010-09-15

Family

ID=35183009


Country Status (1)

Country Link
JP (1) JP4544891B2 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05167927A (en) * 1991-12-12 1993-07-02 Toshiba Corp Image processor
JPH1099305A (en) * 1996-09-30 1998-04-21 Fuji Photo Film Co Ltd Method and device for detecting abnormal shadow candidate

Also Published As

Publication number Publication date
JP2005284697A (en) 2005-10-13

Similar Documents

Publication Publication Date Title
Yao et al. Curvature aided Hough transform for circle detection
JP4999163B2 (en) Image processing method, apparatus, and program
Wang et al. Evaluating edge detection through boundary detection
US8396285B2 (en) Estimating vanishing points in images
AU2015283079A1 (en) Detecting edges of a nucleus using image analysis
CN104619257A (en) System and method for automated detection of lung nodules in medical images
JP2014057306A (en) Document image binarization and segmentation using image phase congruency
CN103440644A (en) Multi-scale image weak edge detection method based on minimum description length
Hussain et al. A comparative analysis of edge detection techniques used in flame image processing
CN104838422A (en) Image processing device and method
Zhang et al. Efficient system of cracking-detection algorithms with 1-mm 3D-surface models and performance measures
Cheng et al. Stereo matching by using the global edge constraint
JP2009211138A (en) Target area extraction method, device and program
Reddy et al. Comparative analysis of common edge detection algorithms using pre-processing technique
CN112801031A (en) Vein image recognition method and device, electronic equipment and readable storage medium
Morard et al. Parsimonious path openings and closings
JP4544891B2 (en) Image processing method and program for line extraction, line concentration image filter
Luo et al. Saliency-based geometry measurement for image fusion performance
Higgins et al. Edge detection using two-dimensional local structure information
Janowczyk et al. Quantifying local heterogeneity via morphologic scale: Distinguishing tumoral from stromal regions
KR101126223B1 (en) Liver segmentation method using MR images
Li et al. Graph network refining for pavement crack detection based on multiscale curvilinear structure filter
Sari et al. A Combination of K-Means and Fuzzy C-Means for Brain Tumor Identification
Liu et al. Segmentation refinement of small-size juxta-pleural lung nodules in CT scans
Sahu et al. Digital image texture classification and detection using radon transform

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20070123

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20091210

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20091218

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100204

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100331

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100401

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100624

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100629

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130709

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4544891

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: R3D02

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees