WO2012005242A1 - Image processing device and image segmenting method - Google Patents

Image processing device and image segmenting method

Info

Publication number
WO2012005242A1
WO2012005242A1 (PCT/JP2011/065356)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
cluster
pixels
adjacent
Application number
PCT/JP2011/065356
Other languages
French (fr)
Japanese (ja)
Inventor
小川 雅嗣 (Masatsugu Ogawa)
松田 雄馬 (Yuma Matsuda)
Original Assignee
日本電気株式会社 (NEC Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to JP2012523873A (published as JPWO2012005242A1)
Publication of WO2012005242A1


Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
            • H04N 1/387: Composing, repositioning or otherwise geometrically modifying originals
            • H04N 1/46: Colour picture communication systems
              • H04N 1/56: Processing of colour picture signals
                • H04N 1/60: Colour correction or control
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00: Image analysis
            • G06T 7/10: Segmentation; Edge detection
              • G06T 7/11: Region-based segmentation
              • G06T 7/187: Segmentation involving region growing, region merging, or connected component labelling
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/10: Image acquisition modality
              • G06T 2207/10024: Color image

Definitions

  • The present invention is based on and claims the priority of Japanese patent application No. 2010-152869 (filed on July 5, 2010), the entire contents of which are incorporated herein by reference.
  • The present invention relates to an image processing apparatus and an image dividing method.
  • In particular, the present invention relates to an image processing apparatus and an image dividing method for extracting objects from an image in which the objects shown are not known in advance.
  • This technology, however, only determines whether two images are the same, and does not function unless the object to be recognized has been prepared in advance. That is, it has the problem that an object appearing in an image cannot be recognized if it has not been registered in the database beforehand.
  • Recognition using a representation method called CSS (Curvature Scale Space), shown in Non-Patent Document 2, which smooths the image contour step by step and represents the positions of the inflection points of each contour, is also known. Using this technology, it is possible to recognize that images of the same object under geometric transformation, or images with similar outlines, are identical or similar. However, the technology presupposes that the contour of the object has already been extracted.
  • Edge extraction is a technique that extracts positions where the change in luminance, color, or the like between pixels is large as edges, and estimates that they lie on the contour of an object.
  • The Canny method is known as one of the best edge extraction methods.
  • Since the edge is a quantity that correlates with the contour of an object, the approach seems reasonable, but it has several drawbacks.
  • The first problem of edge extraction is setting the threshold on the change that is regarded as an edge.
  • The second problem of edge extraction is that edges appear or disappear under subtle changes in lighting: edges vanish partway along a contour, and spurious edges appear in places unrelated to the object's contour. It is therefore necessary to devise an algorithm for estimating interrupted contours and an algorithm for ignoring spurious edges. This too is a very serious problem.
  • The third problem of edge extraction is that information about the presence of an object is lost after edge extraction.
  • The image after edge extraction contains only edges, and information about which regions belong to the same object is gone. It is very difficult to identify an object relying only on edges, and, as described above, it must be done in a state where interrupted contours occur frequently, which makes it even more difficult. Edge extraction is thus often used as a simple method, but it can be said to have a great many problems.
  • Segmentation clusters pixels using some feature amount and a threshold value set on that feature amount.
  • A cluster here means a set of pixels. With this technology it seems possible to divide the individual objects shown in an image into separate clusters, but the generated clusters depend heavily on the threshold setting, and at present a satisfactory result often cannot be obtained. Still, because locations where an object is likely to exist are expressed as clusters, segmentation has the advantage over edge extraction of making those locations easier to identify.
  • Segmentation also has a problem with color. There are very many techniques for segmenting black-and-white images by shading, but when the image is multicolored it is very difficult to decide how to cluster. For a black-and-white image there is a one-dimensional index, density: clustering is possible by building a density histogram and setting the threshold at a value that splits the histogram peaks cleanly into two or three groups, as sketched below. With multiple colors it is unclear how such a histogram should be built, so providing effective segmentation for multicolor images is harder than for black-and-white images.
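  • As an illustration of the conventional black-and-white procedure just described, the following minimal Python sketch clusters a grayscale image into two groups by picking a threshold from its density histogram. The smoothing width and the valley-picking rule are assumptions made purely for illustration, and the outcome shifts whenever they change, which is exactly the fragility discussed above.

        import numpy as np

        def threshold_segment(gray):
            """Split a grayscale image into two clusters via a histogram threshold."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            smooth = np.convolve(hist, np.ones(9) / 9, mode="same")   # crude smoothing
            lo, hi = int(gray.min()), int(gray.max())
            t = lo + int(np.argmin(smooth[lo:hi + 1]))  # deepest valley between the extremes
            return (gray > t).astype(np.uint8)          # cluster labels 0 and 1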
  • Patent Document 1 discloses a method for performing image template matching.
  • In this method, after a DOG (Difference Of Gaussian) filter is applied to the image, two thresholds th1 and th2 are set, and the output is ternarized: -1 where the DOG output is smaller than th2, 0 where it lies between th2 and th1, and +1 where it is larger than th1.
  • Pattern matching is performed after this ternarization.
  • The processing result changes with the settings of the thresholds th1 and th2. A sketch of this ternarization is given below.
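  • The ternarization described for Patent Document 1 can be sketched as follows, assuming the DOG is built from two Gaussian blurs via SciPy; the sigma values and the thresholds th1 and th2 are placeholders, not values taken from the document.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_ternarize(gray, th1=5.0, th2=-5.0, sigma1=1.0, sigma2=2.0):
            """Ternarize a DOG-filtered image with two thresholds (th1 > th2)."""
            g = gray.astype(float)
            dog = gaussian_filter(g, sigma1) - gaussian_filter(g, sigma2)
            out = np.zeros_like(dog, dtype=int)  # values between th2 and th1 stay 0
            out[dog > th1] = +1                  # larger than th1  -> +1
            out[dog < th2] = -1                  # smaller than th2 -> -1
            return out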
  • Patent Document 2 discloses a processing method in which a rough indication of the subject area to be extracted from an image is given and the subject is extracted.
  • An initial nucleus is set from the designated area, and a representative color of the initial nucleus is extracted. Then, while updating a region-growing threshold, the contour line of the subject is extracted and the internal region image is cut out.
  • The processing result varies with the setting of the representative-color reference and the region-growing threshold.
  • Patent Document 3 discloses a method of region division based on a plurality of feature amounts obtained from image data.
  • In this method, for example, luminance/color information and texture information are obtained as the feature amounts, and region integration processing is performed on each combination of region-division results.
  • The method includes a process that integrates minute areas: a threshold value is set, and an area smaller than the threshold is judged to be a minute area and integrated into another area.
  • The processing result changes with the threshold value used to judge minute areas.
  • An object of the present invention is to make it possible to robustly extract an object from image data by providing an image dividing method that does not require a threshold setting.
  • According to the first aspect, there is provided an image processing apparatus having an image division processing unit that divides an image into clusters, i.e., sets of pixels; taking each pixel in the image in turn as the pixel of interest, the image division processing unit divides the image by placing the pixel of interest in the same cluster as the adjacent pixel whose feature amount is closest to the pixel of interest.
  • According to the second aspect, there is provided an image dividing method for dividing an image into clusters, i.e., sets of pixels, in which one of the pixels in the image is taken as the pixel of interest.
  • The method includes an adjacent pixel selection step of selecting, among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to it, and a same-clustering step between adjacent pixels in which the pixel of interest is placed in the same cluster as the pixel selected in the adjacent pixel selection step.
  • The adjacent pixel selection step and the same-clustering step between adjacent pixels are repeated with each pixel in the image as the pixel of interest.
  • According to the first aspect of the present invention, it is possible to provide an image processing apparatus capable of robustly extracting objects from image data.
  • The reason is that the image is divided by placing each pixel of interest in the same cluster as the adjacent pixel with the closest feature amount; since no threshold processing is performed, the image division processing can be carried out robustly.
  • According to the second aspect of the present invention, it is possible to provide an image dividing method capable of robustly extracting objects from image data.
  • The reason is that the pixel with the closest feature amount is selected from the pixels adjacent to the pixel of interest, and the image is divided by placing the pixel of interest in the same cluster as the selected pixel; since no threshold processing is performed, the image division processing can be carried out robustly.
  • Brief description of the drawings: FIG. 1 is a block diagram of the entire image processing apparatus of Example 1 of the present invention; FIG. 2 illustrates the principle of the image division processing of the present invention; FIG. 3 is a flowchart showing the image division method of the present invention; FIG. 4 illustrates an image processing result of the present invention; FIG. 5 is an example of an image processed in Example 1; FIG. 6 is a flowchart for explaining Example 1; FIGS. 7, 8, and 9 illustrate image processing results of Example 1; FIG. 10 is an example of an image processed in Example 1; FIG. 11 illustrates an image processing result of Example 1; FIG. 12 is a block diagram of the entire image processing apparatus of Example 2; FIG. 13 is an example of an image processed in Example 2; FIG. 14 is a flowchart for explaining Example 2; FIG. 15 illustrates an image processing result of Example 2.
  • The image division processing unit divides an image into clusters, i.e., sets of pixels; taking each pixel in the image as the pixel of interest, it divides the image by placing the pixel of interest in the same cluster as the adjacent pixel whose feature amount is closest to the pixel of interest. An image processing apparatus configured in this way is provided.
  • Since no threshold processing is performed, an image processing apparatus that can robustly extract objects from image data can be provided.
  • The feature amount is preferably color information.
  • When the components of the image in the RGB color system are R, G, and B, the color information is preferably represented by a vector whose three components are 2R-G-B, 2G-R-B, and 2B-R-G. [Form 4] When the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is preferably represented by a vector whose four components are 2R-G-B, 2G-R-B, 2B-R-G, and α(R+G+B). [Form 5] It is preferable to further include a contour extraction unit that extracts the boundaries between the clusters formed by the image division processing unit.
  • As in the second aspect, an image dividing method for dividing an image into clusters, i.e., sets of pixels, is provided, in which one of the pixels in the image is taken as the pixel of interest.
  • The method includes an adjacent pixel selection step of selecting, among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to it, and an adjacent-pixel same-clustering step in which the pixel of interest is placed in the same cluster as the pixel selected in the adjacent pixel selection step; the two steps are repeated with each pixel in the image as the pixel of interest.
  • The feature amount is preferably color information.
  • When the components of the image in the RGB color system are R, G, and B, the color information is preferably represented by a vector whose three components are 2R-G-B, 2G-R-B, and 2B-R-G.
  • When the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is preferably represented by a vector whose four components are 2R-G-B, 2G-R-B, 2B-R-G, and α(R+G+B).
  • The method preferably further includes a step of extracting the boundaries of the clusters.
  • As shown in FIG. 1, the image processing apparatus 8 includes an image division processing unit 2 that divides an image into clusters, i.e., sets of pixels. The image is divided by taking each pixel in the image in turn as the pixel of interest and placing it in the same cluster as the adjacent pixel whose feature amount is closest to the pixel of interest.
  • As shown in FIG. 3, the image division method according to the second embodiment of the present invention is an image division method for dividing an image into clusters, i.e., sets of pixels.
  • It comprises an adjacent pixel selection step S13, which selects, among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to the pixel of interest, and an adjacent-pixel same-clustering step S14, which places the pixel of interest in the same cluster as the pixel selected in step S13.
  • The adjacent pixel selection step and the adjacent-pixel same-clustering step are repeated with each pixel in the image as the pixel of interest.
  • Any feature amount that represents a property of the image can be used, but color information is considered the best example of a feature amount.
  • The reason is that the brightness of a captured image changes with shooting conditions such as the amount of light striking the subject, shutter speed, exposure, and sensor sensitivity, whereas color information is less affected by these conditions and tends to stay the same, making it a parameter well suited to robust subject extraction. However, color information alone may not distinguish subjects that have the same color information but different reflectances, so it is desirable to add brightness information as appropriate.
  • The RGB system expresses a color by the outputs of the three primary colors of light. This representation corresponds directly to device output, which makes it very convenient as a color representation, and it is also close to the structure of the human color vision receptors.
  • However, the RGB system has the drawback that it does not reveal whether a difference between two colors is a difference of hue, saturation, or brightness.
  • The HSV system solves that problem.
  • In the HSV system, a color is expressed by three parameters: hue H, saturation S, and brightness (value) V.
  • Conversion from the RGB system to the HSV system is expressed by the following equations (1), (2), and (3).
  • Equation (4) represents only color information.
  • Equation (5) adds to equation (4) a fourth component representing brightness information.
  • α in equation (5) is a coefficient. If only color information is to be used, equation (4) may be used; if brightness information is also to be taken into account, equation (5) may be used. The weighting of color information against brightness information can be adjusted by the coefficient α: setting α large increases the contribution of brightness information, and setting α small decreases it.
  • FIG. 1 is a block diagram of the entire image processing apparatus according to the first embodiment of the present invention.
  • The image acquisition unit 1 acquires an image. The image may be captured by a CCD built into the image processing apparatus, or an image stored externally may be read out and supplied to the image acquisition unit 1.
  • The image acquired by the image acquisition unit 1 is input to the image division processing unit 2 and divided into clusters by its image division processing. The division result is input to the contour extraction unit 3, which extracts the contours of the clusters, and the cluster contours are input to the contour recognition unit 4.
  • The contour recognition unit 4 recognizes objects from the contour data obtained by the contour extraction unit 3 by referring to the contour data storage unit 5, in which the relationships between many contours and object information are stored in advance, and outputs the recognition result.
  • In step S21, an image is acquired.
  • In step S22, it is determined whether to process between pixels or between clusters.
  • If processing between pixels is selected in step S22, the image is divided by same-clustering between adjacent pixels in step S23.
  • If processing between clusters is selected, the image is divided by same-clustering between adjacent clusters in step S28.
  • In step S24, the contour extraction processing of the contour extraction unit 3 is performed on the clustered result.
  • In step S25, the contour recognition processing of the contour recognition unit 4 is performed.
  • In step S26, it is determined whether the recognition result is satisfactory. If No, the clusters whose contours have been recognized are removed from the processing targets in step S29, the process returns to step S22, and the steps from S22 onward are repeated for the remaining target clusters. If Yes, the process ends.
  • In step S22, processing between pixels is generally selected in the first iteration, and processing between clusters in the second and subsequent iterations.
  • FIG. 3 is a flowchart showing the details of the image division by same-clustering between adjacent pixels performed in step S23 of FIG. 6.
  • In step S11, a pixel to be processed is selected.
  • The pixel to be processed is referred to as the pixel of interest.
  • In step S12 of FIG. 3, the color differences with the adjacent pixels are calculated.
  • In step S12, the color vector represented by equation (5) is used as the feature amount for the clustering of the image division processing unit 2.
  • Color vectors are calculated for the pixel of interest shown in FIG. 2 and for the eight pixels adjacent to it.
  • Let the color vector of the pixel of interest be EV0, and let the color vectors of the eight adjacent pixels be EV1, EV2, EV3, EV4, EV5, EV6, EV7, and EV8.
  • The color difference ΔEVi = EVi - EV0 (i = 1, ..., 8) is then also a four-dimensional vector.
  • The norm ΔEVi_NORM of each color difference ΔEVi is further calculated.
  • The norm is the magnitude of the vector: the square root of the sum of the squares of its four components.
  • Step S13 is the adjacent pixel selection step.
  • In step S13, the direction i_min with the smallest value among the eight norms ΔEVi_NORM calculated in step S12 is first searched for. The pixel of interest is then considered continuous with the adjacent pixel in the i_min direction: if i_min = 1, it is considered continuous with the right pixel; if i_min = 2, with the upper-right pixel; and so on around the eight neighbors, so that if i_min = 7 it is considered continuous with the lower pixel, and if i_min = 8 with the lower-right pixel.
  • Step S14 is the same-clustering step between adjacent pixels.
  • In step S14, the cluster of the pixel of interest is replaced by the cluster of the adjacent pixel in the direction i_min with the smallest color difference obtained in step S13. If several pixels tie for the smallest color difference, all of them may be made to belong to the same cluster, or one of them may be selected.
  • In step S15 of FIG. 3, it is determined whether all pixels have been processed. If No, the process returns to step S11 and the processing is repeated; if Yes, the processing ends.
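  • A minimal Python sketch of steps S11 to S15 follows. The RGB array layout, the value of the coefficient α, and the union-find bookkeeping used to propagate the "same cluster" relation are implementation assumptions, not prescribed by the text; ties are broken by taking the first smallest norm, one of the two options allowed in step S14.

        import numpy as np

        ALPHA = 0.1  # weighting coefficient for the brightness component (assumed value)

        def color_vector(p, alpha=ALPHA):
            """Four-component feature of equation (5): (2R-G-B, 2G-R-B, 2B-R-G, alpha(R+G+B))."""
            r, g, b = float(p[0]), float(p[1]), float(p[2])
            return np.array([2*r - g - b, 2*g - r - b, 2*b - r - g, alpha * (r + g + b)])

        def segment(image):
            """Assign every pixel to the cluster of its most similar 8-neighbor (steps S11-S15)."""
            h, w, _ = image.shape
            parent = list(range(h * w))  # union-find over flattened pixel indices

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                       (0, 1), (1, -1), (1, 0), (1, 1)]
            for y in range(h):                    # step S11: take each pixel in turn
                for x in range(w):
                    ev0 = color_vector(image[y, x])
                    best, best_norm = None, None
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            # step S12: norm of the color difference with this neighbor
                            norm = np.linalg.norm(color_vector(image[ny, nx]) - ev0)
                            if best_norm is None or norm < best_norm:
                                best, best_norm = ny * w + nx, norm  # step S13: i_min
                    parent[find(y * w + x)] = find(best)  # step S14: join that cluster
            return np.array([find(i) for i in range(h * w)]).reshape(h, w)

  • Note that no threshold appears anywhere in the loop: only the ordering of the eight norms matters. Because the color vector is linear in (R, G, B), scaling all pixel values by a common factor rescales every norm by the same factor and leaves i_min, and hence the clustering, unchanged.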
  • FIG. 4 shows an example of the result of image division processing by the method shown in the flowchart of FIG. 3.
  • The numbers written in each pixel are the cluster numbers; pixels of the same cluster have the same number.
  • The area of object B in FIG. 4 should be a single cluster but is divided into cluster 3 and cluster 4; that is, same-clustering between cluster 3 and cluster 4 is incomplete. This occurs when every pixel of a cluster selects as its partner a pixel that already belongs to the same cluster.
  • In such a case, same-clustering between clusters may be performed on the remaining processing-target clusters.
  • In step S22, by selecting processing between pixels in the first pass and processing between clusters in the second pass, the situation shown in FIG. 4 is resolved.
  • When processing between clusters, each remaining processing-target cluster is taken in turn as the cluster of interest: step S13 of FIG. 3 becomes an adjacent cluster selection step, and step S14 becomes a same-clustering step between adjacent clusters.
  • The number of neighbors considered in step S12 is then not limited to 8; it is the number of clusters adjacent to the cluster of interest. A sketch of this cluster-level pass is given below.
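  • Continuing the sketch above, the cluster-level pass can be written in the same style. Summarizing each cluster by the mean of its pixels' color vectors and detecting adjacency with a 4-neighborhood test are both assumptions made for illustration; the text does not specify how a cluster's feature amount is formed. The returned mapping sends each cluster to its nearest adjacent cluster, i.e., the pairs that the cluster-level step S14 would join.

        def nearest_adjacent_cluster(labels, image):
            """For each cluster, the adjacent cluster with the closest mean color vector."""
            h, w = labels.shape
            sums, counts, adj = {}, {}, {}
            for y in range(h):
                for x in range(w):
                    i = labels[y, x]
                    sums[i] = sums.get(i, 0) + color_vector(image[y, x])
                    counts[i] = counts.get(i, 0) + 1
                    for ny, nx in ((y + 1, x), (y, x + 1)):  # touching pixels, other labels
                        if ny < h and nx < w and labels[ny, nx] != i:
                            adj.setdefault(i, set()).add(labels[ny, nx])
                            adj.setdefault(labels[ny, nx], set()).add(i)
            means = {i: sums[i] / counts[i] for i in sums}
            # again no threshold: each cluster simply joins its closest adjacent cluster
            return {i: min(js, key=lambda j: np.linalg.norm(means[j] - means[i]))
                    for i, js in adj.items()}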
  • One of the effective points of the present invention is that no threshold is used when dividing into clusters. Since each pixel or cluster is simply joined to the adjacent pixel or cluster whose feature difference is smallest, the relative relationships do not change even if the light intensity changes; in other words, the image is clustered in the same way under different lighting. This is an effective property not found in conventional segmentation techniques, and there is no need to re-tune a threshold repeatedly as in conventional methods. It can be said to be a method that is very robust to changes in light.
  • The image in FIG. 5 is composed of three areas, a round object D, a triangular object E, and the background, each with different color information and brightness information.
  • When the image division processing unit 2 performs the image division processing on all pixels of the image in FIG. 5, the image is clustered by color.
  • FIG. 7 shows the clustered state.
  • The numbers written in the pixels are the cluster numbers, and pixels of the same cluster have the same number.
  • Although the region of the triangular object E has a single color, clusters with different cluster numbers exist within it.
  • The contour extraction unit 3 extracts the boundary portions of the clusters formed by the image division processing unit 2 as contours.
  • The extracted contour information is sent to the contour recognition unit 4.
  • The contour recognition unit 4 identifies objects by matching the contour information extracted by the contour extraction unit 3 against the recognition contour database stored in the contour data storage unit 5.
  • At this point object recognition has been completed once, but clusters that could not be identified as objects may remain.
  • For cluster 2, a circular outline is extracted and the contour recognition unit 4 recognizes a round object.
  • The contour recognition unit 4 removes the recognized cluster 2 from the processing targets, and the process returns to the clustering step with the remaining clusters as the processing targets.
  • This time clustering between clusters is selected, and clusters are joined by inter-cluster clustering.
  • FIG. 8 shows the result of joining between clusters. It can be seen that clusters of the same color have been combined into one cluster. Subsequent processing is the same as for pixels: the contour extraction unit and contour recognition unit process the result in turn to identify objects, and this time the triangular object can also be recognized.
  • FIG. 9 shows the result of contour extraction. Finally, it can be seen that the round object and the triangular object have been recognized and identified. In this way, according to the present invention, the image is divided autonomously without requiring any threshold setting, after which objects can be identified very simply.
  • FIG. 10 is an image of the same subject as in FIG. 5 taken in the dark.
  • FIG. 11 shows the result when the image division processing unit 2 processes the image of FIG. 10. It can be seen that an object in the dark can be recognized without difficulty; this is because the present invention requires no threshold setting and operates robustly regardless of lighting. From the above, the present invention can identify objects, extract contours, and recognize objects very simply and robustly.
  • FIG. 12 is a block diagram of the entire image processing apparatus according to the second embodiment of the present invention.
  • Compared with the first example, the second example further includes a cluster combination unit 6 and a cluster combination data storage unit 7.
  • The cluster combination unit 6 is arranged between the image division processing unit 2 and the contour extraction unit 3, and is configured to perform combination processing on the clustering result of the image division processing unit 2, based on the data stored in the cluster combination data storage unit 7.
  • FIG. 14 is a flowchart for explaining the second embodiment of the present invention.
  • In FIG. 14, step S27 is added to the flowchart of FIG. 6, which shows the operation of the first example.
  • In step S27, cluster combination processing is performed on the result of the clustering of steps S23 and S28, using the connection relationships between clusters.
  • The final object identification and contour extraction are performed by combining clusters of different colors using the connection and inclusion relations between the clusters.
  • The contour can then be extracted by tracing the periphery of the combined cluster.
  • The extracted contour is matched against a contour dictionary held by the image processing apparatus to recognize what the object is.
  • Joining clusters by their connection and inclusion relations is to some extent a combinatorial problem, but because the information that each cluster is a coherent lump of an object is preserved, the processing does not become complicated. This method is simpler than the conventional approaches of identifying an object from edges, where no such lump information survives, or of identifying an object by repeating segmentation many times with different thresholds.
  • In this way, objects can be identified and their contours extracted very simply and effectively.
  • FIG. 13 shows a humanoid object, which is an image composed of seven areas: face, arm (right), arm (left), T-shirt, pants (right), pants (left), and background.
  • The arm (left) and the arm (right) have the same color information and brightness information, and likewise the pants (left) and the pants (right).
  • FIG. 13 is the image processed in Example 2.
  • When clustering is performed in step S23, the face is clustered into cluster 2, the arm (right) into cluster 3, the arm (left) into cluster 4, the T-shirt into cluster 5, the pants (right) into cluster 6, the pants (left) into cluster 7, and the background into cluster 1.
  • Next, the processing of step S27 is performed by the cluster combination unit 6, and as shown in FIG. 15, the six clusters 2, 3, 4, 5, 6, and 7 are combined into cluster 2.
  • Here the clusters are combined using not only differences in physical quantities such as color but also the connection and inclusion relations between clusters. For example, if a roundish object is connected to a rectangular object and thin rectangular objects extend from it, this matches the pattern of a torso connected to a face with limbs protruding from the torso, so these regions are judged to belong to the same object and combined. The characteristics of various objects used for this purpose are stored in advance in the cluster combination data storage unit 7 and retrieved from there. A toy sketch of detecting the two relations follows below.
  • When clusters are combined in this way, the contour of an object can be extracted even if its parts are formed with different colors and physical quantities.
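  • Only the detection of the two relations lends itself to a mechanical sketch; how detected relations are matched against the stored object characteristics is left open by the text. In the toy code below, which continues the Python sketches above, "inclusion" is approximated as a cluster that touches exactly one other cluster; that approximation is an assumption.

        def cluster_relations(labels):
            """Detect connection (shared boundary) and inclusion (single surrounding neighbor)."""
            h, w = labels.shape
            adj = {}
            for y in range(h):
                for x in range(w):
                    for ny, nx in ((y + 1, x), (y, x + 1)):
                        if ny < h and nx < w and labels[ny, nx] != labels[y, x]:
                            adj.setdefault(labels[y, x], set()).add(labels[ny, nx])
                            adj.setdefault(labels[ny, nx], set()).add(labels[y, x])
            connected = {(i, j) for i, js in adj.items() for j in js if i < j}
            included = {i: next(iter(js)) for i, js in adj.items() if len(js) == 1}
            return connected, included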
  • In step S24, contour extraction is performed on the combined clusters. The processing result shown in FIG. 15 shows that the boundary of the combined cluster 2 has been contour-extracted.
  • In step S25, contour recognition is performed.
  • The contour extraction result is matched against the database of the contour data storage unit, and the recognition result that the extracted contour is a human is output. From the above, the present invention is also effective for recognizing an object composed of several color segments. Moreover, Example 2, like Example 1, is robust to changes in lighting.
  • In the above description, clustering is performed mainly using color information, but other feature amounts such as texture may also be used.
  • The recognition above was demonstrated on simple images.
  • The present invention is equally effective for objects with more complicated shapes and for images containing many objects.
  • An image processing apparatus characterized by dividing an image in this way.
  • Clusters may also be combined by taking each processing-target cluster as the cluster of interest and placing it in the same cluster as the adjacent cluster whose feature amount is closest to it.
  • An image dividing method for dividing an image into clusters, i.e., sets of pixels, including an adjacent pixel selection step of taking one of the pixels in the image as the pixel of interest and selecting, among the pixels adjacent to it, the pixel whose feature amount is closest to the pixel of interest, and a same-clustering step between adjacent pixels in which the pixel of interest is placed in the same cluster as the selected pixel.
  • An image dividing method in which the adjacent pixel selection step and the same-clustering step are repeated with each pixel in the image as the pixel of interest.
  • The image dividing method of the present invention makes it possible to recognize subjects appearing in images captured by digital video equipment, so it can be applied to classifying captured image data by subject or to searching an image database for scenes in which a particular subject appears.

Abstract

The disclosed image processing device has an image segmenting processing unit for segmenting an image into clusters, i.e., sets of pixels. The image segmenting processing unit segments the image by taking each pixel in the image in turn as the target pixel and assigning it to the same cluster as the adjacent pixel whose feature value is closest to that of the target pixel. Because no threshold processing is performed, the disclosed image processing device can robustly extract objects from image data.

Description

Image processing apparatus and image dividing method
[Description of related applications]
The present invention is based on and claims the priority of Japanese patent application No. 2010-152869 (filed on July 5, 2010), the entire contents of which are incorporated herein by reference.
The present invention relates to an image processing apparatus and an image dividing method. In particular, the present invention relates to an image processing apparatus and an image dividing method for extracting objects from an image in which the objects shown are not known in advance.
In recent years, with the rapid spread of digital video equipment such as digital cameras, expectations have grown for general object recognition: recognizing what objects are contained in captured images and videos. General object recognition could be applied to many uses, such as appropriately classifying image data stored unclassified in a database, retrieving required image data, extracting a desired scene from a moving image, and re-editing by cutting out only the desired scenes.
Various object recognition technologies, such as face recognition and fingerprint recognition, have been developed so far, but all have been limited to special applications. It has been pointed out that a recognition technique specialized for one kind of target suffers an immediate drop in recognition rate when applied to another purpose, and the development of techniques for recognizing general objects is therefore anticipated. As a technique for recognizing general objects, the feature amount called SIFT (Scale Invariant Feature Transform), shown in Non-Patent Document 1, which uses histograms accumulating local intensity gradients of an image, is known. Using this technique, it is possible to recognize that two images of the same object are identical even under geometric transformation or occlusion. However, the technique only determines whether two images are the same and does not function unless the object to be recognized has been prepared in advance. That is, it has the problem that an object appearing in an image cannot be recognized if it has not been registered in the database beforehand.
In addition, recognition using the representation method called CSS (Curvature Scale Space), shown in Non-Patent Document 2, which smooths the image contour step by step and represents the positions of the inflection points of each contour, is known. Using this technique, it is possible to recognize that images of the same object under geometric transformation, or images with similar contours, are identical or similar. However, the technique presupposes that the contour of the object has already been extracted.
On the other hand, when it is not known what appears in an image, it is very difficult to extract the contour of what appears to be an object. Typical techniques related to object contour extraction are described below. The most common contour extraction technique is edge extraction. This technique extracts positions where luminance, color, or the like change sharply between pixels as edges and presumes them to lie on object contours. The Canny method is currently known as one of the best edge extraction methods. Since edges correlate with object contours, the approach seems reasonable, but it has several drawbacks. The first problem of edge extraction is choosing the threshold on the change that is treated as an edge. The number of extracted edges is strongly affected by this threshold, so in practice one must sweep the threshold to find a value that seems appropriate. This imposes a very large computational load and is a real problem for edge extraction. Moreover, even for the same subject, the lighting naturally differs from one captured image to another, so fixing the threshold causes problems in edge extraction performance.
The second problem of edge extraction is that edges appear or disappear under subtle changes in lighting: edges vanish partway along a contour, and spurious edges appear in places unrelated to the object's contour. Algorithms are therefore needed to estimate interrupted contours and to ignore spurious edges. This too is a very serious problem.
The third problem of edge extraction is that information about the presence of objects is lost after the extraction. The image after edge extraction contains only edges, and information about which regions belong to the same object is gone. It is very difficult to identify an object relying on edges alone, and, as noted above, it must be done while contours are frequently interrupted, which makes it even more difficult. Edge extraction is thus often used for its simplicity, but it can be said to be a method with a great many problems.
When trying to extract what appears to be an object, it is more natural to first identify where an object is likely to be and then extract its contour. This is the opposite of the edge extraction approach, but such approaches have long existed under the names segmentation, clustering, and labelling. Segmentation clusters pixels using some feature amount and a threshold set on that feature amount. Here, a cluster means a set of pixels. With this technique it seems possible to divide the individual objects in an image into separate clusters, but the generated clusters depend heavily on the threshold setting, and at present a satisfactory result often cannot be obtained. Still, because locations where an object is likely to exist are expressed as clusters, segmentation has the advantage over edge extraction of making those locations easier to identify.
Segmentation also has a problem with color. There are very many techniques for segmenting black-and-white images by shading, but when the image is multicolored it is very difficult to decide how to cluster. For a black-and-white image there is a one-dimensional index, density: clustering is possible by building a density histogram and setting thresholds at values that split the histogram peaks cleanly into two or three groups. With multiple colors it is unclear how such a histogram should be built. Providing effective segmentation for multicolor images is therefore harder than for black-and-white images.
Patent Document 1 discloses a method of image template matching. In this method, a DOG (Difference Of Gaussian) filter is applied to the image, two thresholds th1 and th2 are set, and the output is ternarized: -1 where the DOG output is smaller than th2, 0 where it lies between th2 and th1, and +1 where it is larger than th1. Pattern matching is then performed. With such a processing method, however, the result changes with the settings of the thresholds th1 and th2.
Patent Document 2 discloses a processing method in which a rough indication of the subject region to be extracted from an image is given and the subject is extracted. An initial nucleus is set in the indicated region and its representative color is extracted. Then, while updating a region-growing threshold, the contour of the subject is extracted and the interior region image is cut out. In this method too, the result varies with the setting of the representative-color reference and the region-growing threshold.
Patent Document 3 discloses a method of region division based on a plurality of feature amounts obtained from image data. In this method, for example, luminance/color information and texture information are obtained as the feature amounts, and region integration is performed on each combination of region-division results. The method includes a step that merges minute regions: a threshold is set, and a region smaller than the threshold is judged to be minute and integrated into another region. Here again, the result changes with the threshold used to judge minute regions.
Patent Document 1: Japanese Patent Laid-Open No. 06-076062. Patent Document 2: JP 2001-043376 A. Patent Document 3: JP 2004-258752 A.
The entire disclosures of the above patent documents and non-patent documents are incorporated herein by reference. The following analysis is given by the present inventors.
As described above, Patent Documents 1, 2, and 3 all share the problem that the processing result changes with the setting of a threshold or reference value. In general, when the subject matter and the shooting conditions are limited, it may be possible to secure subject-extraction and contour-extraction performance by optimizing the threshold setting. However, it is very difficult to identify objects or extract contours from images whose contents are unknown or whose shooting conditions vary widely, because processing such varied images with a single threshold setting often fails to give satisfactory results. In particular, when lighting changes the brightness of the captured image, conventional methods tend to have difficulty obtaining the same extraction result stably even for the same subject.
That is, with the prior art, when subject extraction or contour extraction is applied to captured images whose subject matter is unrestricted and whose shooting conditions vary, the processing result depends on the threshold setting, and a robust result cannot be obtained.
An object of the present invention is to enable robust extraction of objects from image data by providing an image dividing method that requires no threshold setting.
According to a first aspect of the present invention, there is provided an image processing apparatus having an image division processing unit that divides an image into clusters, i.e., sets of pixels. Taking each pixel in the image in turn as the pixel of interest, the image division processing unit divides the image by placing the pixel of interest in the same cluster as the adjacent pixel whose feature amount is closest to that of the pixel of interest.
According to a second aspect of the present invention, there is provided an image dividing method for dividing an image into clusters, i.e., sets of pixels, comprising an adjacent pixel selection step of taking one pixel of the image as the pixel of interest and selecting, among the pixels adjacent to it, the pixel whose feature amount is closest to that of the pixel of interest, and an adjacent-pixel same-clustering step of placing the pixel of interest in the same cluster as the pixel selected in the adjacent pixel selection step, the two steps being repeated with each pixel in the image as the pixel of interest.
According to the first aspect of the present invention, it becomes possible to provide an image processing apparatus capable of robustly extracting objects from image data. The reason is that the image is divided by placing each pixel of interest in the same cluster as the adjacent pixel with the closest feature amount; since no threshold processing is performed, the image division can be carried out robustly.
According to the second aspect of the present invention, it becomes possible to provide an image dividing method capable of robustly extracting objects from image data, for the same reason: the pixel with the closest feature amount is selected among the pixels adjacent to the pixel of interest, the pixel of interest is placed in the same cluster as the selected pixel, and no threshold processing is performed, so the image division can be carried out robustly.
FIG. 1 is a block diagram of the entire image processing apparatus of Example 1 of the present invention. FIG. 2 illustrates the principle of the image division processing of the present invention. FIG. 3 is a flowchart showing the image division method of the present invention. FIG. 4 illustrates an image processing result of the present invention. FIG. 5 is an example of an image processed in Example 1. FIG. 6 is a flowchart for explaining Example 1 of the present invention. FIGS. 7, 8, and 9 illustrate image processing results of Example 1. FIG. 10 is an example of an image processed in Example 1. FIG. 11 illustrates an image processing result of Example 1. FIG. 12 is a block diagram of the entire image processing apparatus of Example 2 of the present invention. FIG. 13 is an example of an image processed in Example 2. FIG. 14 is a flowchart for explaining Example 2. FIG. 15 illustrates an image processing result of Example 2.
An outline of embodiments of the present invention will now be described. Note that the drawing reference numerals attached to this outline are merely examples to aid understanding and are not intended to limit the invention to the illustrated modes.
The following modes are possible in the present invention.
[Form 1]
As in the first aspect, there is provided an image processing apparatus having an image division processing unit that divides an image into clusters, i.e., sets of pixels; taking each pixel in the image in turn as the pixel of interest, the image division processing unit divides the image by placing the pixel of interest in the same cluster as the adjacent pixel whose feature amount is closest to it. Because the pixel of interest is simply placed in the cluster of the adjacent pixel with the closest feature amount and no threshold processing is performed, an image processing apparatus that can robustly extract objects from image data can be provided.
[Form 2]
The feature amount is preferably color information.
[Form 3]
When the components of the image in the RGB color system are R, G, and B, the color information is preferably represented by a vector whose three components are 2R-G-B, 2G-R-B, and 2B-R-G.
[Form 4]
When the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is preferably represented by a vector whose four components are 2R-G-B, 2G-R-B, 2B-R-G, and α(R+G+B).
[Form 5]
It is preferable to further include a contour extraction unit that extracts the boundaries between the clusters formed by the image division processing unit.
[Form 6]
As in the second aspect, there is provided an image dividing method for dividing an image into clusters, i.e., sets of pixels, comprising an adjacent pixel selection step of taking one pixel of the image as the pixel of interest and selecting, among the pixels adjacent to it, the pixel whose feature amount is closest to the pixel of interest, and an adjacent-pixel same-clustering step of placing the pixel of interest in the same cluster as the pixel selected in the adjacent pixel selection step, the two steps being repeated with each pixel in the image as the pixel of interest.
[Form 7]
The feature amount is preferably color information.
[Form 8]
When the components of the image in the RGB color system are R, G, and B, the color information is preferably represented by a vector whose three components are 2R-G-B, 2G-R-B, and 2B-R-G.
[Form 9]
When the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is preferably represented by a vector whose four components are 2R-G-B, 2G-R-B, 2B-R-G, and α(R+G+B).
[Form 10]
The method preferably further includes a step of extracting the boundaries of the clusters.
Specific embodiments will now be described with reference to the drawings. As shown in FIG. 1, the image processing apparatus 8 of the first embodiment of the present invention has an image division processing unit 2 that divides an image into clusters, i.e., sets of pixels. Taking each pixel in the image in turn as the pixel of interest, the image division processing unit 2 divides the image by placing the pixel of interest in the same cluster as the adjacent pixel whose feature amount is closest to it.
As shown in FIG. 3, the image division method according to the second embodiment of the present invention is an image division method for dividing an image into clusters, each cluster being a set of pixels. The method includes an adjacent-pixel selection step S13 of taking one of the pixels included in the image as a pixel of interest and selecting, from among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to that of the pixel of interest, and a same-clustering step S14 between adjacent pixels of assigning the pixel of interest to the same cluster as the pixel selected in step S13. These two steps are repeated with each pixel included in the image taken as the pixel of interest.
The feature amount used in the first and second embodiments of the present invention is now described. Any quantity that represents a property of the image can be used as the feature amount, but color information is considered the best example. The reason is that while the brightness of a captured image changes with shooting conditions such as the amount of light falling on the subject, shutter speed, exposure, and sensor sensitivity, color information is relatively insensitive to these factors and tends to remain stable, which makes it a parameter well suited to robust subject extraction. However, color information alone may fail to distinguish subjects that have the same color but different reflectances, so it is desirable to take brightness information into account as appropriate.
Next, the characteristics and problems of the RGB and HSV methods, two widely used ways of expressing color information, are analyzed. The RGB method expresses a color as the outputs of the three primary colors of light. This representation is very convenient because it corresponds directly to the output of a device, and it is also close to the structure of the detectors underlying human color vision. However, the RGB method has the drawback that it is not clear whether a difference between two colors corresponds to a difference in hue, saturation, or brightness.
The HSV method, on the other hand, solves that problem. In the HSV method, a color is expressed by three parameters: hue H, saturation S, and value (brightness) V. The conversion from the RGB system to the HSV system is expressed by the following equations (1), (2), and (3).
Here MAX = max(R, G, B) and MIN = min(R, G, B), and the conversion is given by the standard relations:

$$H = \begin{cases} 60 \times \dfrac{G - B}{MAX - MIN} & (MAX = R) \\[4pt] 60 \times \dfrac{B - R}{MAX - MIN} + 120 & (MAX = G) \\[4pt] 60 \times \dfrac{R - G}{MAX - MIN} + 240 & (MAX = B) \end{cases} \tag{1}$$

$$S = \frac{MAX - MIN}{MAX} \tag{2}$$

$$V = MAX \tag{3}$$
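A direct implementation of these relations makes the singular points discussed next easy to see. The following Python sketch is an illustration added for exposition, not part of the original disclosure; it returns None where H or S is undefined.

```python
def rgb_to_hsv(r, g, b):
    """Direct implementation of equations (1)-(3).

    Returns (H, S, V); H or S is None at the singular points."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                    # equation (3)
    s = None if mx == 0 else (mx - mn) / mx   # equation (2): undefined when MAX = 0
    if mx == mn:
        h = None                              # equation (1): undefined when MAX = MIN
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v

print(rgb_to_hsv(128, 128, 128))  # (None, 0.0, 128): a gray pixel has no hue
print(rgb_to_hsv(0, 0, 0))        # (None, None, 0): black has neither hue nor saturation
```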
However, from equation (1), the hue H cannot be defined when MAX = MIN, and from equation (2), the saturation S cannot be defined when MAX = 0. Furthermore, the hue H is not uniformly continuous. Thus, when the HSV method is actually used, singular points and continuity must be taken into account, which requires complicated processing. In view of this situation, the present invention devises a method of expressing color information that is simpler than HSV: the three-dimensional vector of equation (4) or the four-dimensional vector of equation (5) is used.
$$EV = \begin{pmatrix} 2R - G - B \\ 2G - R - B \\ 2B - R - G \end{pmatrix} \tag{4}$$

$$EV = \begin{pmatrix} 2R - G - B \\ 2G - R - B \\ 2B - R - G \\ \alpha(R + G + B) \end{pmatrix} \tag{5}$$
Equations (4) and (5) retain the simplicity and convenience of the RGB outputs, which correspond directly to the output of the device, while expressing the characteristics corresponding to the hue H and saturation S of the HSV method linearly and continuously. In this specification, the vectors defined by equations (4) and (5) are called color vectors.
The order of the elements in the color vectors of equations (4) and (5) is arbitrary, and the elements can be exchanged; the same processing result is obtained however the elements are ordered. Equation (4) represents color information only, whereas equation (5) adds to equation (4) a fourth component representing brightness information, where α is a coefficient. When only color information is to be used, equation (4) suffices; when brightness information is also to be taken into account, equation (5) is used. The relative weighting of color and brightness information is adjusted by the coefficient α: a larger α increases the contribution of brightness information, and a smaller α decreases it.
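Equations (4) and (5) reduce to a few arithmetic operations per pixel. The following NumPy sketch is an illustrative implementation; the function name and interface are assumptions introduced for this example, not anything specified in the disclosure.

```python
import numpy as np

def color_vector(image, alpha=None):
    """Compute per-pixel color vectors from an RGB image (H x W x 3 array).

    With alpha=None, returns the 3-component vector of equation (4);
    with a numeric alpha, appends the brightness term alpha*(R+G+B)
    of equation (5). (Illustrative helper, not from the patent.)"""
    img = image.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    components = [2*r - g - b, 2*g - r - b, 2*b - r - g]
    if alpha is not None:
        components.append(alpha * (r + g + b))
    return np.stack(components, axis=-1)
```

Note that if a lighting change is roughly multiplicative, scaling R, G, and B together, then every component of the color vector and every difference between color vectors scales by the same factor, so the nearest-neighbor comparisons used below are largely unaffected. This is consistent with the robustness argument made later in the description.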
The examples are described in detail below.
[Configuration of Example 1]
FIG. 1 shows a block diagram of the entire image processing apparatus of Example 1 of the present invention. First, the image acquisition unit 1 acquires an image. The image may be captured by a CCD built into the image processing apparatus, or an externally stored image may be retrieved and supplied to the image acquisition unit 1. The image acquired by the image acquisition unit 1 is input to the image division processing unit 2 and divided into clusters by its image division processing. The clustering result from the image division processing unit 2 is then input to the contour extraction unit 3, which extracts the contours of the clusters. The cluster contours are then input to the contour recognition unit 4. By referring to the contour data storage unit 5, in which the relationships between many contours and object information are stored in advance, the contour recognition unit 4 outputs an object recognition result from the contour data obtained by the contour extraction unit 3.
[Operation of Example 1]
Next, the operation of Example 1 is described with reference to FIG. 6, a flowchart illustrating Example 1. First, in step S21, an image is acquired. Next, in step S22, it is decided whether processing is to be performed between pixels or between clusters. If processing between pixels is selected in step S22, image division by same-clustering between adjacent pixels is performed in step S23; if processing between clusters is selected, image division by same-clustering between adjacent clusters is performed in step S28. Next, in step S24, the contour extraction unit 3 performs contour extraction on the clustered result. Next, in step S25, the contour recognition unit 4 performs contour recognition: matching against the contour database stored in advance in the contour data storage unit 5 is carried out, and an object recognition result is output. Next, in step S26, it is determined whether the recognition result is satisfactory. If No, the clusters whose contours have been recognized are removed from the processing targets in step S29, the process returns to step S22, and the processing from step S22 onward is repeated for the remaining clusters. If Yes, the process ends. In step S22, processing between pixels is normally selected in the first iteration, and processing between clusters in the second and subsequent iterations.
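The control flow of FIG. 6 can be summarized in code. The sketch below is schematic only: the four processing stages are passed in as callables because the patent specifies their roles, not their interfaces, and every name here is an assumption made for this illustration.

```python
from typing import Callable, List

def recognition_loop(image,
                     segment_pixels: Callable,    # step S23
                     merge_clusters: Callable,    # step S28
                     extract_contours: Callable,  # step S24
                     match_contours: Callable,    # steps S25-S26; returns (hits, misses)
                     max_rounds: int = 10) -> List:
    """FIG. 6 as a loop: pixel-level clustering on the first round,
    then cluster-level merging on whatever remains unrecognized."""
    recognized: List = []
    clusters = segment_pixels(image)               # round 1: between pixels
    for _ in range(max_rounds):
        contours = extract_contours(clusters)      # step S24
        hits, clusters = match_contours(contours)  # step S25; step S29 drops hits
        recognized.extend(hits)
        if clusters is None or len(clusters) == 0:  # step S26: satisfied
            break
        clusters = merge_clusters(clusters)        # later rounds: between clusters
    return recognized
```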
Next, FIG. 3 is a flowchart showing the details of the image division by same-clustering between adjacent pixels performed in step S23 of FIG. 6. First, in step S11, a pixel to be processed is selected. In this specification, the pixel being processed is called the pixel of interest.
Next, in step S12 of FIG. 3, the color differences from the adjacent pixels are calculated. Step S12 is described in detail below. In Example 1 of the present invention, the color vector of equation (5) is used as the feature amount for the clustering performed by the image division processing unit 2. Color vectors are calculated for the pixel of interest shown in FIG. 2 and for the eight pixels adjacent to it. The color vector of the pixel of interest is denoted EV0, and the color vectors of the eight adjacent pixels are denoted EV1, EV2, EV3, EV4, EV5, EV6, EV7, and EV8. Next, the differences between the color vector of the pixel of interest and those of the eight adjacent pixels, that is, the color differences ΔEVi (i = 1, 2, ..., 8), are calculated by equation (6); each ΔEVi is also a four-dimensional vector. The norm ΔEVi_NORM of each color difference ΔEVi is then calculated, where the norm is the magnitude of the vector, that is, the square root of the sum of the squares of its four components.
$$\Delta EV_i = EV_i - EV_0 \qquad (i = 1, 2, \ldots, 8) \tag{6}$$
Next, step S13 of FIG. 3, the adjacent-pixel selection step, is described. In step S13, the direction giving the smallest of the eight norms ΔEVi_NORM calculated in step S12 is found and denoted i_min; the pixel of interest is regarded as continuous with the adjacent pixel in the direction i_min. As can be seen from FIG. 2, the pixel of interest is regarded as continuous with the pixel to the right when i_min = 1, to the upper right when i_min = 2, above when i_min = 3, to the upper left when i_min = 4, to the left when i_min = 5, to the lower left when i_min = 6, below when i_min = 7, and to the lower right when i_min = 8.
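Steps S12 and S13 amount to evaluating equation (6) for the eight neighbors and taking the argmin of the norms. The following NumPy sketch illustrates this for a single interior pixel; the direction numbering follows FIG. 2 as just described, and the function name is an assumption for this example.

```python
import numpy as np

# Offsets (dy, dx) for directions i = 1..8 of FIG. 2:
# right, upper right, up, upper left, left, lower left, down, lower right.
NEIGHBOR_OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                    (0, -1), (1, -1), (1, 0), (1, 1)]

def closest_neighbor_direction(ev, y, x):
    """Steps S12-S13 for an interior pixel (y, x).

    ev is the per-pixel color-vector array (H x W x 4, equation (5)).
    Returns i_min in 1..8, the direction of the adjacent pixel whose
    color vector is closest to that of the pixel of interest."""
    ev0 = ev[y, x]                                  # EV0, pixel of interest
    norms = []
    for dy, dx in NEIGHBOR_OFFSETS:
        delta = ev[y + dy, x + dx] - ev0            # equation (6): ΔEVi
        norms.append(np.linalg.norm(delta))         # ΔEVi_NORM
    return int(np.argmin(norms)) + 1                # i_min (1-indexed)
```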
Next, step S14 of FIG. 3, the same-clustering step between adjacent pixels, is described. The cluster of the pixel of interest is replaced with the cluster of the pixel in the direction i_min obtained in step S13, that is, the adjacent pixel with the smallest color difference. If several pixels tie for the smallest color difference, all of them may be treated as pixels belonging to the same cluster, or one of them may be selected.
Next, in step S15 of FIG. 3, it is determined whether all pixels have been processed. If NO, the process returns to step S11 and is repeated; if YES, the process ends. In this way, clustering is performed on one image, and the image is divided into regions consisting of several clusters.
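A complete pixel-level pass (steps S11 through S15 of FIG. 3) can be realized compactly. The sketch below uses a union-find structure as one convenient way of bookkeeping the "same cluster" assignments; the patent specifies only the merge rule, so this realization, the boundary handling (only existing neighbors are considered), and the function name are assumptions for this illustration.

```python
import numpy as np

def segment_by_nearest_neighbor(ev):
    """One pass of FIG. 3 over all pixels of an H x W color-vector array ev.

    Each pixel is placed in the same cluster as its closest adjacent
    pixel (steps S13-S14); the loop over pixels is step S15. Returns an
    H x W array of cluster labels numbered 1, 2, 3, ... as in FIG. 4."""
    h, w = ev.shape[:2]
    parent = list(range(h * w))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):                       # step S11: pick each pixel in turn
        for x in range(w):
            best, best_norm = None, None
            for dy, dx in offsets:           # step S12: color differences
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    n = np.linalg.norm(ev[ny, nx] - ev[y, x])
                    if best_norm is None or n < best_norm:
                        best, best_norm = ny * w + nx, n
            union(y * w + x, best)           # step S14: same cluster

    flat = np.array([find(i) for i in range(h * w)])
    _, inv = np.unique(flat, return_inverse=True)   # renumber clusters densely
    return inv.reshape(h, w) + 1
```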
As described above, the operation of Example 1 has been explained with the flowchart of FIG. 6, and the operation of the image division processing unit in step S23 of FIG. 6 has been explained in more detail with the flowchart of FIG. 3.
FIG. 4 shows an example of the result of image division performed by the method shown in the flowchart of FIG. 3. In FIG. 4, the number written in each pixel is its cluster number; pixels in the same cluster have the same number. The region of object B in FIG. 4 should form a single cluster, but it has been split into cluster 3 and cluster 4; that is, the same-clustering between cluster 3 and cluster 4 is incomplete. This occurs when a pixel in a cluster selects, as its partner, a pixel that already belongs to the same cluster. In such a case, the clusters for which the pixel-level clustering produced good results are removed from the processing targets, and clustering between clusters is then performed on the remaining clusters. Cluster 3 and cluster 4 in FIG. 4 are thereby joined, realizing a clustering in which each object is cleanly separated. Referring to the flowchart of FIG. 6, step S22 solves the problem illustrated in FIG. 4 by selecting processing between pixels in the first pass and processing between clusters in the second pass.
Here, the image division by same-clustering between adjacent clusters in step S28 of FIG. 6, that is, the cluster-level clustering process, is obtained by replacing the pixel-level processing of FIG. 3 with cluster-level processing. For processing between clusters, step S13 of FIG. 3 becomes an adjacent-cluster selection step, and step S14 becomes a same-clustering step between adjacent clusters. Also, the number of adjacent clusters in step S12 of FIG. 3 is not necessarily 8 but is the number of clusters adjacent to the cluster of interest.
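The cluster-level round thus applies the same nearest-neighbor rule to clusters instead of pixels: each cluster is united with the adjacent cluster whose feature is closest. The following sketch is one straightforward realization; representing a cluster by the mean of its color vectors is an assumption made for this illustration, since the patent does not fix a particular cluster-level feature.

```python
import numpy as np
from collections import defaultdict

def merge_adjacent_clusters(labels, ev):
    """One round of step S28 on a label map (H x W) and color vectors ev.

    Each cluster is merged with the adjacent cluster whose mean color
    vector is nearest; unlike the pixel case, a cluster may have any
    number of neighbors. Returns the updated label map."""
    h, w = labels.shape
    sums, counts = defaultdict(lambda: 0.0), defaultdict(int)
    adjacency = defaultdict(set)
    for y in range(h):
        for x in range(w):
            c = labels[y, x]
            sums[c] = sums[c] + ev[y, x]
            counts[c] += 1
            for dy, dx in ((0, 1), (1, 0)):          # 4-adjacency suffices here
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[ny, nx] != c:
                    adjacency[c].add(labels[ny, nx])
                    adjacency[labels[ny, nx]].add(c)
    means = {c: sums[c] / counts[c] for c in counts}

    relabel = {c: c for c in counts}
    for c, neighbors in adjacency.items():
        if neighbors:                                 # unite with closest cluster
            relabel[c] = min(neighbors,
                             key=lambda n: np.linalg.norm(means[n] - means[c]))

    return np.vectorize(lambda c: relabel[c])(labels)
```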
One of the effective points of the present invention is that no threshold is used when dividing the image into clusters. Since each pixel or cluster is simply joined to the adjacent pixel or cluster with the smallest color difference, the relative relationships do not change even if the lighting changes; that is, the image is clustered in the same way regardless of the amount of light. This is an effective feature not found in conventional segmentation techniques, and it removes the conventional need to repeat segmentation with different thresholds. In other words, the method is highly robust to lighting variations.
Next, the operation of Example 1 is further explained with a concrete example of image processing. The image of FIG. 5 consists of three regions: a round object D, a triangular object E, and the background, each with different color information and brightness information. When the image division processing unit 2 performs image division on all pixels of the image in FIG. 5, the image is clustered by color. FIG. 7 shows the clustered state; the number written in each pixel is its cluster number, and pixels in the same cluster have the same number. Although the region of the triangular object E is a single color, clusters with different cluster numbers exist within it.
At this stage, the data is sent to the contour extraction unit 3, which extracts the boundaries of the clusters formed by the image division processing unit 2 as contours. The extracted contour information is sent to the contour recognition unit 4, which matches it against the recognition contour database stored in the contour data storage unit 5 and identifies the objects.
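Extracting cluster boundaries reduces to finding the pixels where the label map changes. The sketch below, an illustration rather than the patent's specified procedure, marks every pixel that has at least one 4-adjacent pixel with a different label; such a boundary map is what would be passed on to the contour recognition unit.

```python
import numpy as np

def extract_contours(labels):
    """Return a boolean H x W map that is True on cluster boundaries.

    A pixel is a boundary pixel if any 4-adjacent pixel belongs to a
    different cluster."""
    boundary = np.zeros(labels.shape, dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # right neighbor differs
    boundary[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]  # left neighbor differs
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]   # lower neighbor differs
    boundary[1:, :]  |= labels[1:, :]  != labels[:-1, :]  # upper neighbor differs
    return boundary
```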
Object recognition is provisionally completed by the processing up to this point, but clusters that could not be identified as objects may still remain. In particular, as shown in FIG. 7, regions of the same color that ended up in different clusters are unlikely to be identified as objects. As shown in FIG. 7, a circular contour is extracted for cluster 2, and the contour recognition unit 4 recognizes a round object. For the region split into cluster 3 and cluster 4, however, no match with the contour database is found. The contour recognition unit 4 therefore removes the recognized cluster 2 from the processing targets, and the process returns to the clustering stage with the remaining clusters as targets. This time, clustering between clusters rather than between pixels is selected, and clusters are joined by cluster-level clustering, each cluster being joined to the adjacent cluster with the smallest color difference. FIG. 8 shows the result: clusters of the same color have been joined into a single cluster. The subsequent processing is the same as in the pixel case; the contour extraction unit and the contour recognition unit process the result in turn and identify the objects. This time the triangular object can also be recognized. FIG. 9 shows the contour extraction result; finally, the round object and the triangular object are recognized and identified. In this way, according to the present invention, the image is divided autonomously without requiring any threshold setting, after which objects can be identified very simply.
FIG. 10 is an image of the same subject as FIG. 5, captured in the dark. FIG. 11 shows the result of processing the image of FIG. 10 with the image division processing unit 2. Even objects in the dark are easily recognized. This is because the present invention requires no threshold setting and operates robustly regardless of lighting. The above shows that the present invention enables very simple and robust object identification, contour extraction, and object recognition.
[Configuration of Example 2]
FIG. 12 is a block diagram of the entire image processing apparatus of Example 2 of the present invention. Compared with Example 1, Example 2 further includes a cluster combining unit 6 and a cluster-combination data storage unit 7. The cluster combining unit 6 is arranged between the image division processing unit 2 and the contour extraction unit 3 and is configured to combine the clustering results of the image division processing unit 2 on the basis of the data stored in the cluster-combination data storage unit 7.
[Operation of Example 2]
FIG. 14 is a flowchart for explaining Example 2 of the present invention. Compared with FIG. 6, which shows the operation of Example 1, FIG. 14 adds step S27. In step S27, the results of the clustering performed in steps S23 and S28 are further combined using, among other things, the connection relationships between clusters.
As described above, in Example 2 the final object identification and contour extraction use the connection and inclusion relationships between clusters, so that even clusters of different colors are combined. A contour can then be extracted by taking the periphery of the combined cluster. The extracted contour is matched against the contour dictionary held by the image processing apparatus to recognize what the object is. Connection and inclusion between clusters pose something of a combinatorial problem, but because the color-based clustering of the present invention preserves information about objects as connected regions, the processing does not become complicated. This method is simpler than conventional methods that identify objects from edges, which do not preserve such region information, or that identify objects by repeating segmentation with different thresholds. Thus, the present invention makes it possible to identify objects and extract their contours very simply and effectively.
FIG. 13 shows a humanoid figure: an image consisting of seven regions, namely the face, the right arm, the left arm, a T-shirt, the right trouser leg, the left trouser leg, and the background. The left and right arms have the same color and brightness information, as do the left and right trouser legs. FIG. 13 is taken as the input image for Example 2. First, clustering is performed in step S23: the face becomes cluster 2, the right arm cluster 3, the left arm cluster 4, the T-shirt cluster 5, the right trouser leg cluster 6, the left trouser leg cluster 7, and the background cluster 1.
Next, the cluster combining unit 6 performs the processing of step S27, and as shown in FIG. 15, the six clusters 2, 3, 4, 5, 6, and 7 are merged into cluster 2. Here, the cluster combining of step S27 does not combine clusters merely by differences in physical quantities such as color difference; it combines them using the connection and inclusion relationships between clusters. For example, if a rectangular region is connected to an elliptical region, and thinner rectangular regions extend from the rectangular one, this matches the characteristic shape of a human body, with a torso connected to a face and limbs extending from the torso, so these regions are judged to belong to the same object and are combined. In this way, whenever a relationship between clusters that satisfies the characteristics of an object is found, the clusters are combined. The characteristics of various objects are stored in advance in the cluster-combination data storage unit 7 and retrieved from there. Combining clusters in this way makes it possible to extract the contour of an object even when its parts are formed with different colors or physical quantities. In step S24 of FIG. 14, contour extraction is performed on the combined clusters; the processing result of FIG. 15 shows that the boundary of the combined cluster 2 has been extracted as a contour.
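Step S27 thus differs from step S28 in that the merge criterion is a stored relationship between clusters rather than a color difference. A minimal sketch is given below, assuming the cluster-combination data can be exposed as a predicate on pairs of clusters; this interface, like the function names, is an assumption made for the illustration and is not specified in the patent.

```python
from itertools import combinations
from typing import Callable, Dict, Hashable, List

def combine_related_clusters(clusters: List[Hashable],
                             related: Callable[[Hashable, Hashable], bool]) -> Dict:
    """Step S27: unite clusters that the stored object knowledge says
    belong together (e.g. a torso-shaped cluster touching limb-shaped
    ones). 'related' stands in for lookups against the cluster-combination
    data storage unit 7. Returns a mapping from each cluster to the
    representative of its combined group."""
    parent = {c: c for c in clusters}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for a, b in combinations(clusters, 2):
        if related(a, b):                  # same object: merge the two groups
            parent[find(a)] = find(b)
    return {c: find(c) for c in clusters}
```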
Next, in step S25, contour recognition is performed. The contour extraction result is matched against the database of the contour data storage unit, and a recognition result that the extracted contour is a human is output. The above shows that the present invention is also effective for recognizing objects made up of several differently colored segments. As in Example 1, recognition in Example 2 is robust to lighting variations.
In Examples 1 and 2, clustering is performed mainly using color information, but other feature amounts, such as texture, may also be used. Also, while the examples use simple images for recognition, the present invention is equally effective for objects with more complex shapes and for images containing many objects.
Some or all of the above embodiments can also be described as in the following supplementary notes, although they are not limited to these.
(Supplementary note 1) An image processing apparatus having an image division processing unit that divides an image into clusters, each cluster being a set of pixels,
wherein the image division processing unit divides the image by taking each pixel included in the image in turn as a pixel of interest and assigning the pixel of interest to the same cluster as the pixel, among the pixels adjacent to it, whose feature amount is closest to that of the pixel of interest.
(Supplementary note 2) The image processing apparatus according to supplementary note 1, wherein the feature amount is color information.
(Supplementary note 3) The image processing apparatus according to supplementary note 2, wherein, when the components of the image in the RGB color system are R, G, and B, the color information is represented by a vector whose three components are 2R−G−B, 2G−R−B, and 2B−R−G.
(Supplementary note 4) The image processing apparatus according to supplementary note 1, wherein, when the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is represented by a vector whose four components are 2R−G−B, 2G−R−B, 2B−R−G, and α(R+G+B).
(Supplementary note 5) The image processing apparatus according to any one of supplementary notes 1 to 4, further having a contour extraction unit that extracts the boundaries of the clusters formed by the image division processing unit.
(Supplementary note 6) The image processing apparatus according to supplementary note 5, further having a contour recognition unit that performs contour recognition based on the contour extraction results of the contour extraction unit, wherein, after the clusters recognized by the contour recognition unit are removed from the processing targets, the clusters are further combined by taking each remaining cluster included in the image in turn as a cluster of interest and assigning the cluster of interest to the same cluster as the cluster, among the clusters adjacent to it, whose feature amount is closest to that of the cluster of interest.
(Supplementary note 7) An image division method for dividing an image into clusters, each cluster being a set of pixels, including:
an adjacent-pixel selection step of taking one of the pixels included in the image as a pixel of interest and selecting, from among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to that of the pixel of interest; and
a same-clustering step between adjacent pixels of assigning the pixel of interest to the same cluster as the pixel selected in the adjacent-pixel selection step,
wherein the adjacent-pixel selection step and the same-clustering step between adjacent pixels are repeated with each pixel included in the image taken as the pixel of interest.
(Supplementary note 8) The image division method according to supplementary note 7, wherein the feature amount is color information.
(Supplementary note 9) The image division method according to supplementary note 8, wherein, when the components of the image in the RGB color system are R, G, and B, the color information is represented by a vector whose three components are 2R−G−B, 2G−R−B, and 2B−R−G.
(Supplementary note 10) The image division method according to supplementary note 7, wherein, when the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is represented by a vector whose four components are 2R−G−B, 2G−R−B, 2B−R−G, and α(R+G+B).
(Supplementary note 11) The image division method according to any one of supplementary notes 7 to 10, further including a step of extracting the boundaries of the clusters.
(Supplementary note 12) The image division method according to supplementary note 11, further including: a contour recognition step of recognizing contours based on the contour extraction results obtained in the step of extracting the boundaries of the clusters; a step of removing the clusters recognized in the contour recognition step from the processing targets; an adjacent-cluster selection step of taking each remaining cluster in turn as a cluster of interest and selecting, from among the clusters adjacent to the cluster of interest, the cluster whose feature amount is closest to that of the cluster of interest; and a same-clustering step between adjacent clusters of assigning the cluster of interest to the same cluster as the cluster selected in the adjacent-cluster selection step.
Because the image division method of the present invention makes it possible to recognize the subjects appearing in images captured by digital video devices, it can be applied to, for example, classifying captured image data by subject and searching an image database for scenes containing a particular subject.
Within the framework of the entire disclosure of the present invention (including the claims and drawings), the examples and embodiments can be changed and adjusted based on its basic technical concept. Various combinations and selections of the disclosed elements are possible within the scope of the claims of the present invention. That is, the present invention naturally includes various modifications and corrections that a person skilled in the art could make in accordance with the entire disclosure, including the claims and drawings, and its technical concept.
1 Image acquisition unit
2 Image division processing unit
3 Contour extraction unit
4 Contour recognition unit
5 Contour data storage unit
6 Cluster combining unit
7 Cluster-combination data storage unit
8, 9 Image processing apparatus

Claims (10)

  1.  An image processing apparatus having an image division processing unit that divides an image into clusters, each cluster being a set of pixels,
     wherein the image division processing unit divides the image by taking each pixel included in the image in turn as a pixel of interest and assigning the pixel of interest to the same cluster as the pixel, among the pixels adjacent to it, whose feature amount is closest to that of the pixel of interest.
  2.  The image processing apparatus according to claim 1, wherein the feature amount is color information.
  3.  The image processing apparatus according to claim 2, wherein, when the components of the image in the RGB color system are R, G, and B, the color information is represented by a vector whose three components are 2R−G−B, 2G−R−B, and 2B−R−G.
  4.  The image processing apparatus according to claim 1, wherein, when the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is represented by a vector whose four components are 2R−G−B, 2G−R−B, 2B−R−G, and α(R+G+B).
  5.  The image processing apparatus according to any one of claims 1 to 4, further having a contour extraction unit that extracts the boundaries of the clusters formed by the image division processing unit.
  6.  An image division method for dividing an image into clusters, each cluster being a set of pixels, including:
     an adjacent-pixel selection step of taking one of the pixels included in the image as a pixel of interest and selecting, from among the pixels adjacent to the pixel of interest, the pixel whose feature amount is closest to that of the pixel of interest; and
     a same-clustering step between adjacent pixels of assigning the pixel of interest to the same cluster as the pixel selected in the adjacent-pixel selection step,
     wherein the adjacent-pixel selection step and the same-clustering step between adjacent pixels are repeated with each pixel included in the image taken as the pixel of interest.
  7.  The image division method according to claim 6, wherein the feature amount is color information.
  8.  The image division method according to claim 7, wherein, when the components of the image in the RGB color system are R, G, and B, the color information is represented by a vector whose three components are 2R−G−B, 2G−R−B, and 2B−R−G.
  9.  The image division method according to claim 6, wherein, when the components of the image in the RGB color system are R, G, and B, and α is a coefficient, the feature amount is represented by a vector whose four components are 2R−G−B, 2G−R−B, 2B−R−G, and α(R+G+B).
  10.  The image division method according to any one of claims 6 to 9, further including a step of extracting the boundaries of the clusters.
PCT/JP2011/065356 2010-07-05 2011-07-05 Image processing device and image segmenting method WO2012005242A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012523873A JPWO2012005242A1 (en) 2010-07-05 2011-07-05 Image processing apparatus and image dividing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-152869 2010-07-05
JP2010152869 2010-07-05

Publications (1)

Publication Number Publication Date
WO2012005242A1 true WO2012005242A1 (en) 2012-01-12

Family

ID=45441221

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/065356 WO2012005242A1 (en) 2010-07-05 2011-07-05 Image processing device and image segmenting method

Country Status (2)

Country Link
JP (1) JPWO2012005242A1 (en)
WO (1) WO2012005242A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07160879A (en) * 1993-12-07 1995-06-23 Toppan Printing Co Ltd Picture processing method
JPH08167028A (en) * 1994-12-13 1996-06-25 Toppan Printing Co Ltd Image processing method
JP2001022925A (en) * 1999-07-09 2001-01-26 Mitsubishi Chemicals Corp Method and device for image processing based on artificial life method and computer readable recording medium with image processing program recorded therein
JP2001061160A (en) * 1999-08-24 2001-03-06 Matsushita Electric Ind Co Ltd Color correction device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022519355A (en) * 2019-06-24 2022-03-23 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 How to display resources, appliances, equipment and computer programs
JP7210089B2 (en) 2019-06-24 2023-01-23 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 RESOURCE DISPLAY METHOD, APPARATUS, DEVICE AND COMPUTER PROGRAM

Also Published As

Publication number Publication date
JPWO2012005242A1 (en) 2013-09-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11803576

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012523873

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11803576

Country of ref document: EP

Kind code of ref document: A1