JPS60169977A - Segmenting method of picture area - Google Patents

Segmenting method of picture area

Info

Publication number
JPS60169977A
JPS60169977A JP59025035A JP2503584A JPS60169977A JP S60169977 A JPS60169977 A JP S60169977A JP 59025035 A JP59025035 A JP 59025035A JP 2503584 A JP2503584 A JP 2503584A JP S60169977 A JPS60169977 A JP S60169977A
Authority
JP
Japan
Prior art keywords
picture
image
region
intensity
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP59025035A
Other languages
Japanese (ja)
Inventor
Fuminobu Furumura
文伸 古村
Nobuo Hamano
浜野 亘男
Yuichi Kitatsume
北爪 友一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP59025035A priority Critical patent/JPS60169977A/en
Publication of JPS60169977A publication Critical patent/JPS60169977A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10008Still image; Photographic image from scanner, fax or copier
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20101Interactive definition of point of interest, landmark or seed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To extract a specific area in a picture rapidly and without special skill by having a computer detect the boundary of a region of interest with high accuracy in accordance with simple instructions from a human operator. CONSTITUTION: An objective picture 14 is A/D-converted by a picture input device 15 such as a drum scanner and stored in a storage device 18 through a central processor 16. The input picture data are displayed on a display device 19. While observing the display, the operator manipulates a coordinate input device 20 such as a trackball to designate plural representative points on the outline of the region of interest in the objective picture. The picture data 83 are stored in a picture memory 23 and displayed on a CRT monitor 26 through a D/A converter 25, with a cursor displayed superposed on the picture. On receiving a command, a cursor generating device 24 sends the coordinate value indicating the cursor position to the central processor 16. Thus the picture coordinates of the plural representative points on the outline are successively supplied to the central processor 16 by the operator's actions.

Description

[Detailed Description of the Invention]

[Field of Application of the Invention]

The present invention relates to computer processing of images such as photographs, and in particular to an image-region extraction method suited to extracting a specific region in an image quickly and accurately through simple interactive processing between a computer and a human operator.

[Background of the Invention]

Conventionally, the task of cutting out only a specific region, such as a person, from an image such as a photograph and removing the remaining region, such as the background, was done by cutting the desired portion out of the photographic paper with a knife, or by painting over the background with a brush and paint. Such work requires skill, and has the drawback of demanding a great deal of labor and time. Several attempts have therefore been made to digitize the image and perform the region extraction by computer. In the first method, an image display device is connected to the computer and the target image is displayed on it; a human operator uses a coordinate input device such as a trackball or joystick to move a cursor, superimposed on the displayed image, along the boundary between the region of interest and the background, thereby entering the coordinates of the boundary line. The computer then uses these coordinate data to cut out the region of interest and erase the background. This method has the disadvantage that accurately tracing and entering the outline of the region of interest takes time and skill. In the second method, the region of interest is extracted automatically by the computer: exploiting the difference in gray level or in color between the region of interest and the background, the intensity value of each pixel of the target image is examined to decide whether the pixel lies inside the region.

However, this method has the drawback that the boundary of the region is not detected accurately unless the gray level or color is sufficiently uniform within the target region. The third method detects the abrupt change in pixel intensity at the boundary between the region of interest and the background: the pixel intensity is spatially differentiated over the entire image, and pixels for which the absolute value of the derivative exceeds a certain threshold are detected as boundary candidates. With this method, however, discontinuity lines of gray level inside the region of interest and inside the background are detected in addition to the boundary of interest, so a further step is needed to extract only the desired boundary line from this result.

As described above, with the conventional methods it was difficult to extract a specific region in an image simply and quickly.

[Object of the Invention]

An object of the present invention is to provide a method that enables a specific region in an image to be extracted quickly and without skill, by having a computer detect the boundary of the region of interest with high accuracy in accordance with simple instructions from a human operator.

[Summary of the Invention]

To achieve the above object, the present invention is characterized in that a rough outline is first designated manually, and the accurate outline is then detected from the rate of change of the image intensity in the vicinity of this rough outline.

[Embodiments of the Invention]

First, the principle of region-boundary detection in an image as used in the present invention will be explained. Let Fig. 1 be the original image to be processed. Image 1 is a two-dimensional array of pixels, each having an intensity value. Consider the case in which the region of the person 2 is to be extracted from it as the region of interest and the remaining background region 3 removed. Fig. 2 shows the image to be obtained as the result of the desired processing: in image 4, the person region 2 is identical to the original image, while in the remaining background region 5 the intensity of each pixel has been replaced by a prescribed intensity. That is, the background 3 of the original image is filled in with a uniform gray intensity. To realize this specific-region extraction and background removal, the present invention proceeds on the original image 1 as shown in Fig. 3.

First, the contour line 6, which is the boundary between the region of interest and the background, is extracted. Once this contour line is obtained, it is used to decide, for each pixel of the original image, whether the pixel lies inside contour line 6. If it does, the intensity of the corresponding pixel of the original image 1 is used as the pixel intensity of the output image; if it lies outside, the given background intensity is used instead. The desired image 4 is thereby obtained.

Next, the method of extracting the contour line 6 in the present invention will be explained. As shown in Fig. 4, a human operator designates a plurality of representative points on the outline of the region of interest in the original image 1; call them 7-1, 7-2, ..., 7-n (n points in all). These designations serve as the input to the automatic contour-extraction processing by the computer described below, and high designation accuracy is not necessarily required. In the automatic extraction processing, the computer first connects consecutive pairs of the designated representative points with straight lines 8. This is called the rough contour line. Next, the following processing is performed for each pixel on the rough contour line 8, explained here taking point 9 as an example. Consider a test line segment 10 orthogonal to the rough contour line 8 at point 9. The number of pixels between the end points 11 and 12 of line segment 10 is set to a prescribed value. The intensity of each pixel on line segment 10 is examined as follows to detect the correct contour point, that is, the point 13 lying on the boundary between the region of interest and the background. Fig. 5 shows the intensity variation of the pixels on line segment 10. In this example the pixel intensity of the region of interest 51 is large (that is, bright) and that of the background region 52 is small (that is, dark), so it can be seen from the pixel intensities that the boundary point of the two regions is point 13. As shown in Fig. 6, this point is found as the point where the absolute value of the derivative, taken along the direction of line segment 10, of the intensity variation of Fig. 5 is maximum.

By performing the above processing for each pixel point 9 on the rough contour line 8 and connecting the points 13 thus obtained, the desired contour line is obtained.

An embodiment of the present invention will now be described with reference to the drawings.

Fig. 7 is an overall block diagram of an apparatus that performs image processing according to the present invention. The target image 14 is A/D-converted by an image input device 15 such as a drum scanner and then stored in a storage device 18 via a central processing unit 16. The input image data are displayed on a display device 19. While viewing this display, the human operator manipulates a coordinate input device 20 such as a trackball to designate a plurality of representative points on the outline of the region of interest in the target image. These designations need not be precise. This operation is realized by the display device 19, whose configuration is shown in Fig. 8. That is, in accordance with the coordinates 81 and command 82 entered through the coordinate input device 20, the cursor generating device 24 operates as follows.

It displays a cursor at the designated coordinates on the CRT monitor 26 via the D/A converter 25.

Meanwhile, the image data 83 sent from the central processing unit 16 are stored in the image memory 23 and displayed on the CRT monitor 26 by the D/A converter 25.

At this time the cursor is displayed superimposed on the image.

Watching the display on the monitor 26, the operator manipulates the coordinate input device 20 and, when the cursor has been brought to the desired position, turns on a command switch 27 mounted on the coordinate input device 20. A command 82 is then sent to the display device 19. On receiving this command, the cursor generating device 24 sends the coordinate values indicating the cursor position to the central processing unit 16. In this way the image coordinates of a plurality of representative points on the contour are supplied one after another to the central processing unit 16 by the operator's actions.

Next, using these coordinates, the central processing unit 16 performs the processing shown in Fig. 9 to extract the contour line of the region of interest automatically. First, in step 91:

the rough contour line is created by connecting consecutive designated representative points with straight lines. Step 92 determines whether the intensity examination has been completed for every pixel on the rough contour line; if not, step 93 sets up, for the current pixel on the rough contour line, a test line segment orthogonal to that line. The length of this segment is Ω pixels (Ω being on the order of 10). Step 94 computes the derivative (rate of change) of the intensity of each pixel along this segment. Step 95 takes the point where the absolute value of this derivative is maximum as the contour point on the test segment. Performing the above processing for every pixel on the rough contour line and connecting the contour points thus obtained yields the desired contour line.

Next, using these contour-line data, the central processing unit 16 decides for each pixel of the target image whether it lies inside the region of interest. For interior pixels, the pixel intensity of the original image is used unchanged as the pixel intensity of the output image. For pixels judged to be exterior, the background pixel-intensity value set by the operator from the console 17 is used as the intensity of the corresponding pixel of the output image.

The desired output image data, with the background removed, are thus obtained. These data are sent to the display device 19 for display, and are also printed onto photographic paper or the like by an image output device 21 such as a film writer and used as the output image 22.

The above operations make it possible to extract a specific region in an image. Depending on the image, however, the contour extraction may be affected by image noise, and the resulting contour line 101 may have high-frequency irregularities as shown in Fig. 10.

In that case, any of the following processes may be applied.

(1) Smoothing is performed before step 94 of Fig. 9 (differentiation of the pixel intensities on the test segment). Let i (i = 1, ..., n) be the index of a pixel on the segment and x(i) its intensity. The simple difference is

d(i) = x(i) − x(i−1)   ...(1)

but instead of this, a smoothed value is first computed as

x̄(i) = Σ_{k=−m+1}^{m} w(k) x(i+k)   ...(2)

where the w(k) (k = −m+1, ..., m) are weights, and the derivative is then taken as

d(i) = x̄(i) − x̄(i−1).   ...(3)

(2) The obtained contour line 101 is subjected to a two-dimensional Fourier transform, its high-frequency components are removed, and an inverse Fourier transform is then applied.
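One common realization of such frequency-domain contour smoothing treats the contour as a complex sequence x + iy and low-passes a one-dimensional DFT of it; this differs from the two-dimensional transform the text mentions and is offered only as an assumed, minimal sketch. The cutoff and the sample contour are also illustrative.

```python
# Assumed sketch of Fourier smoothing of a contour: take a plain DFT of the
# points as complex numbers, zero the high frequencies, transform back.
# (Illustrative code; not the patent's exact procedure.)
import cmath

def dft(z):
    n = len(z)
    return [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(Z):
    n = len(Z)
    return [sum(Z[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def lowpass_contour(points, keep):
    """Keep only the `keep` lowest frequencies (positive and negative)."""
    z = [complex(x, y) for x, y in points]
    Z = dft(z)
    n = len(Z)
    Z = [c if (k <= keep or k >= n - keep) else 0 for k, c in enumerate(Z)]
    return [(c.real, c.imag) for c in idft(Z)]

# A square contour smoothed toward a rounder shape:
square = [(1, 0), (1, 1), (0, 1), (-1, 1),
          (-1, 0), (-1, -1), (0, -1), (1, -1)]
smooth = lowpass_contour(square, keep=1)
```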

(3) The obtained contour line 101 is fitted, by least squares, with a piecewise smooth curve. For example, the coordinates (x, y) along the contour are expressed as a parametric curve with coefficients a_n, b_n (n = 0, 1, ..., N), and these coefficients are determined by the method of least squares.

A smooth contour line is obtained by any of the above methods.

On the other hand, when it is desired to blur the contour, as with the hair of a person, this can be done as follows: a low-pass filter is applied centered on each pixel point of the contour found by the method of this embodiment. Let x(i, j) be the intensity of pixel (i, j) after region extraction. For each pixel (i′, j′) in the vicinity of the contour line, the mask operation

X(i′, j′) = Σ_{k=−M}^{M} Σ_{l=−M}^{M} w(k, l) x(i′−k, j′−l)   ...(5)

is performed, where the w(k, l) are weights.

As described above, this embodiment makes it possible to extract a specific region in an image automatically with simple human operations.

[Effects of the Invention]

According to the present invention, a computer can automatically and accurately cut out a specific region in an image using a contour roughly designated by a human operator. This reduces the cost of photographic image processing, such as background removal, that formerly could be performed only by skilled workers.

In addition, the image cut-out function, combined with manipulation and editing functions such as image enlargement and reduction, increases the effectiveness of computer-based document image processing.

[Brief Description of the Drawings]

Fig. 1 shows the image to be processed; Fig. 2 the output image with the background removed; Fig. 3 the contour of the person in Fig. 1; Fig. 4 the automatic contour-extraction scheme; Fig. 5 the pixel-intensity distribution along line segment 10 of Fig. 4; Fig. 6 the rate of change of the pixel intensity along the same segment 10; Fig. 7 the overall configuration of the image processing apparatus according to the present invention; Fig. 8 the configuration of the image display device constituting part of the apparatus; Fig. 9 the processing flowchart of contour detection; and Fig. 10 a contour-extraction result.

Claims (1)

[Claims] An image-region extraction method characterized in that, using a roughly designated contour line for a region of interest in an image, the accurate contour line of the region is detected from the rate of change of the image intensity.
JP59025035A 1984-02-15 1984-02-15 Segmenting method of picture area Pending JPS60169977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59025035A JPS60169977A (en) 1984-02-15 1984-02-15 Segmenting method of picture area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP59025035A JPS60169977A (en) 1984-02-15 1984-02-15 Segmenting method of picture area

Publications (1)

Publication Number Publication Date
JPS60169977A true JPS60169977A (en) 1985-09-03

Family

ID=12154651

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59025035A Pending JPS60169977A (en) 1984-02-15 1984-02-15 Segmenting method of picture area

Country Status (1)

Country Link
JP (1) JPS60169977A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63106872A (en) * 1986-10-24 1988-05-11 Canon Inc Image processor
US5990930A (en) * 1997-07-28 1999-11-23 Sharp Kabushiki Kaisha Image-area extracting method for visual telephone
US8994815B2 (en) 2010-01-22 2015-03-31 Hitachi High—Technologies Corporation Method of extracting contour lines of image data obtained by means of charged particle beam device, and contour line extraction device
US9702695B2 (en) 2010-05-27 2017-07-11 Hitachi High-Technologies Corporation Image processing device, charged particle beam device, charged particle beam device adjustment sample, and manufacturing method thereof


Similar Documents

Publication Publication Date Title
Adhikari et al. Image-based retrieval of concrete crack properties
JP2005523053A5 (en)
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
JPS60169977A (en) Segmenting method of picture area
JP6527765B2 (en) Wrinkle state analyzer and method
JP4082718B2 (en) Image recognition method, image display method, and image recognition apparatus
RU2614545C1 (en) Device for merging medical images
JP3661073B2 (en) Imaging parameter measuring method and apparatus, and recording medium
US5146311A (en) Method of indentifying and quantifying oxides on rolled metal strip
JPH0443204B2 (en)
CA2230197C (en) Strand orientation sensing
García et al. Improved revealing of hidden structures and defects for historic art sculptures using poisson image editing
JP3047017B2 (en) Image processing method
JP2534552B2 (en) Image processing device
JPH0614339B2 (en) Image correction method
CN112884694B (en) Defect detection method, device, equipment and medium for flat display panel
KR20030063672A (en) Method for detecting a crack of pavement
JPH09245166A (en) Pattern matching device
JP2000182056A (en) Picture processor
Mueller et al. Registration of transportation damages using a high-resolution CCD camera
JPH07220095A (en) Extracting device for image of object
KR100351129B1 (en) System for Examination printing inferior using Circumference Mask
JP3197267B2 (en) Image processing device
JP3595015B2 (en) Image center of gravity detection method
JPH08111817A (en) Shadow detector