JP2016122367A5 - - Google Patents
- Publication number
- JP2016122367A5 (application JP2014262659A)
- Authority
- JP
- Japan
- Prior art keywords
- contour
- image
- area
- foreground
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Description
However, in the technique of the above-mentioned patent document, the segmentation process can be executed simply by tapping a single point on the extraction target, which greatly reduces the burden on the user. Nevertheless, depending on the state of the extraction target (for example, when the target has a complicated shape, or when there are multiple extraction candidates), the entire target may not fit correctly within the region, and this risks adversely affecting the process of dividing the image into background pixels and foreground pixels.
In order to solve the above problem, one aspect of the image processing apparatus of the present invention comprises:
contour extraction means for extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
selection means for selecting, from among the plurality of contours extracted by the contour extraction means, a contour having a predetermined contour state;
partitioning means for partitioning, based on the contour selected by the selection means, the image to be processed into a foreground region, a background region, and a boundary region; and
division processing means for performing, on the image partitioned by the partitioning means, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
Also, in order to solve the above problem, one aspect of the image processing method of the present invention is an image processing method in an image processing apparatus, including:
a process of extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
a process of selecting, from among the plurality of extracted contours, a contour having a predetermined contour state;
a process of partitioning, based on the selected contour, the image to be processed into a foreground region, a background region, and a boundary region; and
a process of performing, on the partitioned image, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
Also, in order to solve the above problem, one aspect of the program of the present invention causes a computer of an image processing apparatus to realize:
a function of extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
a function of selecting, from among the plurality of extracted contours, a contour having a predetermined contour state;
a function of partitioning, based on the selected contour, the image to be processed into a foreground region, a background region, and a boundary region; and
a function of performing, on the partitioned image, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
FIG. 2 illustrates image data (the original image) read out from the image memory 3c as the processing target.
The original image is, for example, an image 800 pixels wide by 600 pixels high: a photograph (grayscale image) of a picture P (a demon's face drawn in blue lines) drawn by hand with a blue-ink pen near the center of a blank sheet of paper or a whiteboard. In the illustrated example, reflections at the time of shooting caused uneven luminance and brightness across the image, so that the lower half of the image is dark and the upper half is bright.
In the present embodiment, as preprocessing for cutting out the handwritten picture P drawn on the blank sheet or whiteboard from the original image shown in FIG. 2, a predetermined edge detection process is applied to the entire image. Note that the original image is not limited to a grayscale image and may of course be a color image. Likewise, although a blank sheet and a whiteboard are given as examples of the drawing medium, the medium is not limited to these; colored paper, a blackboard, or the like may be used instead. Furthermore, the size of the original image is not limited to 800 x 600 pixels and is arbitrary.
In the present embodiment, OpenCV's FindContours is used for the contour extraction process. FindContours is a contour approximation function that extracts all contours from a binary image that has undergone edge detection. In the illustrated example, the background of the original image is shown in white, and within the handwritten picture P the outermost contour (outer shell) is shown in red, the first inner contour in green, the next (second) inner contour in blue, and the third inner contour in yellow. Since contour approximation with FindContours is a technique commonly used in image recognition processing, and the present embodiment relies on that well-known technique, a detailed description is omitted here.
After extracting the outermost contour in this way, the present embodiment creates a ternary-map image from the outermost contour extraction result. The ternary-map image is an image partitioned into three regions: a foreground region whose pixels can clearly be determined to be foreground pixels, a background region whose pixels can clearly be determined to be background pixels, and an indeterminate boundary region whose pixels cannot be determined to be either. By executing a predetermined image division process (for example, GrabCut segmentation) on the resulting ternary-map image, the desired region of the handwritten picture P (the line-drawing region) is cut out of the image. This image division process divides the entire image into foreground pixels (the foreground region) and background pixels (the background region), and is a segmentation process known as GrabCut. In this process, the foreground region means the target region (the region of the handwritten picture P desired by the user, i.e., the cut-out region), and the background region means everything other than the foreground region (in other words, the inside of the contour corresponding to the target region, and the outside of that contour, respectively).
FIG. 7 is a flowchart describing the ternary-map creation process (step A4 in FIG. 6) in detail.
First, as initialization, the control unit 1 creates an image of the same size as the image to be cut out (the contour extraction result), for example 800 x 600 pixels (step B1), and fills this newly created image entirely with the color designating clear background pixels (the designated color of the background region) (step B2). If the designated background color is black, the entire image is painted black. Then, based on the inclusion relationships of the regions demarcated by the contours in the image to be cut out (the contour extraction result), all the outermost contours (outer-shell contours) are extracted; if several outermost contours exist, the first one is selected (step B3). For the contour extraction result illustrated in FIG. 8(2), for example, the diagonal line in the upper-left corner of the image (an extraneous contour unrelated to the demon drawing) is selected as the first outermost contour.
After this initialization (steps B1 to B3), it is checked whether all outermost contours have been selected and processed (step B4). Since only the first outermost contour has been selected at this point (NO in step B4), the flow proceeds to step B5, where the length of the selected outermost contour is measured, and it is then checked whether this measured length is longer than a predetermined length (the comparison reference length) (step B6). That is, in order to extract, from among the plurality of contours detected by edge detection, a contour having the predetermined contour state as the outermost contour, the length of the outermost contour is compared with the comparison reference length. Here, the comparison reference length is a value suited to identifying the outermost contour of the handwritten picture within the image, and is a predetermined fixed value. For example, the comparison reference length is 800 pixels, corresponding to the horizontal size of the image, although the value is of course not limited to this.
Here, as shown in FIG. 8(2), the length of the first selected outermost contour is shorter than the comparison reference length (for example, 800 pixels) (NO in step B6), so the flow moves to step B10 to select the next outermost contour and then returns to step B4; this operation is repeated until all outermost contours have been selected. Suppose now that the outermost contour of the demon's handwritten picture P is selected as the next outermost contour (step B10). In the example of FIG. 8(2), unselected outermost contours still remain (NO in step B4), so the flow proceeds to step B5, measures the length of the selected outermost contour, and checks whether it is longer than the comparison reference length (step B6). In this case, as shown in FIG. 8(2), the outermost contour of the demon's picture P is longer than the comparison reference length (YES in step B6), so the flow moves on to the process of creating the ternary map (steps B7 to B9).
FIG. 9 illustrates the ternary-map image created based on the contour extraction result of FIG. 8(2).
In this case, OpenCV's drawContours is used both to fill the interior of the selected outermost contour (step B7) and to draw its outline (step B9), with the outline drawn as a 10-pixel-wide line. In the figure, the dashed circle shows, for convenience, a magnified portion of the ternary-map image: the filled interior of the outermost contour becomes the white foreground region, the 10-pixel-wide gray outline becomes the boundary region, and the remaining area becomes the black background region, yielding a color-coded ternary-map image. Since drawContours is a function commonly used in image recognition processing, and the present embodiment relies on that well-known technique, a detailed description is omitted here.
When determining whether the length of the selected outermost contour is longer than the predetermined length (the comparison reference length), if the comparison reference length can be adjusted arbitrarily by user operation, the user can tune it while visually checking the ternary-map image, making it possible to cut out any target accurately.
In the embodiment described above, only the center value of the comparison reference length is adjusted by user operation, but both the upper and lower limits of the comparison reference length may be adjusted by user operation instead. In that case, step C6 in FIG. 12 would be changed to lengthen the lower limit of the comparison reference length, and step C9 to shorten its upper limit.
In the embodiment described above, the segmentation method called GrabCut is used as the predetermined image division process, but other segmentation methods may be used. Furthermore, although the embodiment cuts out a partial region of the image, the processing is not limited to cutting out; processing such as coloring the interior of the partial region may be performed instead.
Claims (14)
1. An image processing apparatus comprising:
contour extraction means for extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
selection means for selecting, from among the plurality of contours extracted by the contour extraction means, a contour having a predetermined contour state;
partitioning means for partitioning, based on the contour selected by the selection means, the image to be processed into a foreground region, a background region, and a boundary region; and
division processing means for performing, on the image partitioned by the partitioning means, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
2. The image processing apparatus according to claim 1, wherein the predetermined contour state is defined by the inclusion relationship of the contours, and the selection means selects the outermost contour from among the plurality of contours extracted by the contour extraction means.
3. The image processing apparatus according to claim 1 or 2, wherein the predetermined contour state is defined by the length of the contour, and the selection means selects, from among the plurality of contours, a contour whose length satisfies a predetermined condition.
4. The image processing apparatus according to any one of claims 1 to 3, further comprising designation means for arbitrarily designating a contour length by user operation, wherein the selection means selects, from among the plurality of contours, the contour of the length designated by the designation means.
5. The image processing apparatus according to any one of claims 1 to 4, further comprising edge detection means for performing a predetermined edge detection process on the image to be processed, wherein the contour extraction means extracts, based on the edge portions detected by the edge detection means, the plurality of contours demarcating the plurality of region candidates.
6. The image processing apparatus according to claim 5, wherein the edge detection means detects local edge portions in the image, and the contour extraction means extracts a plurality of contours, each formed by connecting the local edge portions detected by the edge detection means.
7. The image processing apparatus according to any one of claims 1 to 6, wherein the contour extraction means extracts, at a stage before the region candidate corresponding to the foreground region has been determined from among the plurality of region candidates contained in the image to be processed, the plurality of contours demarcating the plurality of region candidates including regions other than the foreground region, and the predetermined image division process, after the region candidate corresponding to the foreground region has been determined, uses the pixel information within the determined region candidate to divide the image so that only the contour of the foreground region, excluding regions other than the foreground region, is extracted.
8. The image processing apparatus according to any one of claims 1 to 7, wherein the partitioning means sets the boundary region by expanding the contour selected by the selection means to a width corresponding to the uncertainty as to whether pixels belong to the foreground region or the background region.
9. The image processing apparatus according to any one of claims 1 to 8, wherein the partitioning means generates, based on the contour selected by the selection means, a ternary-map image partitioned into the foreground region, the background region, and the boundary region, and the division processing means performs the predetermined image division process on the image to be processed using the ternary-map image generated by the partitioning means, determining each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region.
10. The image processing apparatus according to claim 9, wherein the partitioning means generates the ternary-map image by filling the interior of the contour selected by the selection means with the designated color of the foreground region and drawing the selected contour in the designated color of the boundary region.
11. The image processing apparatus according to any one of claims 1 to 10, wherein the predetermined image division process determines, for each pixel in the boundary region whose assignment to foreground or background is undetermined, whether it is foreground or background so that a total value, obtained by aggregating from a predetermined viewpoint the luminance differences between adjacent pixels over portions of the image, becomes smaller.
12. The image processing apparatus according to any one of claims 1 to 11, wherein the predetermined image division process is a process whose region division speed is slower than that of the predetermined edge detection process but whose region division accuracy is higher.
13. An image processing method in an image processing apparatus, comprising:
a process of extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
a process of selecting, from among the plurality of extracted contours, a contour having a predetermined contour state;
a process of partitioning, based on the selected contour, the image to be processed into a foreground region, a background region, and a boundary region; and
a process of performing, on the partitioned image, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
14. A program causing a computer of an image processing apparatus to realize:
a function of extracting a plurality of contours, each demarcating one of a plurality of region candidates contained in an image to be processed;
a function of selecting, from among the plurality of extracted contours, a contour having a predetermined contour state;
a function of partitioning, based on the selected contour, the image to be processed into a foreground region, a background region, and a boundary region; and
a function of performing, on the partitioned image, a predetermined image division process that determines each boundary pixel contained in the boundary region to be either a foreground pixel belonging to the foreground region or a background pixel belonging to the background region, thereby dividing the boundary region between the foreground region and the background region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014262659A JP2016122367A (en) | 2014-12-25 | 2014-12-25 | Image processor, image processing method and program |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2016122367A JP2016122367A (en) | 2016-07-07 |
JP2016122367A5 true JP2016122367A5 (en) | 2018-01-25 |
Family
ID=56327460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2014262659A Pending JP2016122367A (en) | 2014-12-25 | 2014-12-25 | Image processor, image processing method and program |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP2016122367A (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108890154B (en) * | 2018-08-16 | 2020-06-02 | 威海先临三维科技有限公司 | Special-shaped crystal laser inner carving method |
US11195283B2 (en) | 2019-07-15 | 2021-12-07 | Google Llc | Video background substraction using depth |
CN114018946B (en) * | 2021-10-20 | 2023-02-03 | 武汉理工大学 | OpenCV-based high-reflectivity bottle cap defect detection method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10320566A (en) * | 1997-05-19 | 1998-12-04 | Canon Inc | Picture processor, picture processing method, and storage medium storing the same method |
JP4800641B2 (en) * | 2005-03-11 | 2011-10-26 | オリンパス株式会社 | Micromanipulator system, program, and specimen manipulation method |
JP2008212017A (en) * | 2007-03-01 | 2008-09-18 | Nikon Corp | Apparatus for determining cell state, and method for determining cell state |
JP2009276294A (en) * | 2008-05-16 | 2009-11-26 | Toshiba Corp | Image processing method |
JP2010205067A (en) * | 2009-03-04 | 2010-09-16 | Fujifilm Corp | Device, method and program for extracting area |
US8855411B2 (en) * | 2011-05-16 | 2014-10-07 | Microsoft Corporation | Opacity measurement using a global pixel set |
JP2013029930A (en) * | 2011-07-27 | 2013-02-07 | Univ Of Tokyo | Image processing device |
JP5872401B2 (en) * | 2012-07-10 | 2016-03-01 | セコム株式会社 | Region dividing device |
- 2014-12-25: JP JP2014262659A patent/JP2016122367A/en active Pending