WO2012053811A2 - Color clustering system and method based on tensor voting - Google Patents

Color clustering system and method based on tensor voting

Info

Publication number
WO2012053811A2
WO2012053811A2 PCT/KR2011/007765
Authority
WO
WIPO (PCT)
Prior art keywords
color
tensor
voting
clustering
unit
Prior art date
Application number
PCT/KR2011/007765
Other languages
English (en)
Korean (ko)
Other versions
WO2012053811A9 (fr)
WO2012053811A3 (fr)
Inventor
이귀상
Original Assignee
전남대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전남대학교산학협력단 filed Critical 전남대학교산학협력단
Publication of WO2012053811A2 publication Critical patent/WO2012053811A2/fr
Publication of WO2012053811A3 publication Critical patent/WO2012053811A3/fr
Publication of WO2012053811A9 publication Critical patent/WO2012053811A9/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Definitions

  • The present invention relates to color clustering based on tensor voting, and in particular to color clustering of an input image.
  • More specifically, the present invention relates to a color clustering system and method based on tensor voting that promote effective image segmentation through clustering.
  • Color clustering methods are widely used for image segmentation and involve mapping pixels to points in a multidimensional color feature space such as RGB or HSV.
  • A conventional 3D face reconstruction technique using 2D images, such as photographs of a face, has been disclosed as '3D face reconstruction from 2D images'.
  • Prior face knowledge, or a generic face, is used to detect sparse 3D information from the images and to identify image pairs.
  • Bundle adjustment is performed to determine more accurate 3D camera positions, the image pairs are rectified, and high-density 3D facial information is detected without using prior facial knowledge.
  • Outliers are removed using, for example, tensor voting.
  • A 3D plane is detected from the high-density 3D information, and plane detail information is detected from the images.
  • Another conventional technique, 'a face region extraction method using adaptive color clustering', includes: inputting an image; identifying pixels in the input image that satisfy a skin color condition; grouping similar colors, within a color group size range in the color space, for the pixels satisfying the skin color condition; designating a face candidate area from pixels belonging to a group color; identifying or adjusting a face area using a non-color feature in the designated face candidate area; and adjusting the color group size and storing the extracted face region information.
  • The step of identifying the face region using the non-color feature is performed through elliptic shape matching; a face candidate region composed of pixels belonging to one group color is matched against the pixels of that group color inside the ellipse.
  • Through elliptic shape matching, when the ratio of pixels belonging to the group color inside the ellipse is low, the color group size is adjusted; when the ratio of pixels belonging to the group color outside the ellipse is low, the color group size is increased.
  • Another conventional technique, 'image processing method, apparatus and program', arranges embedding patches in order from the pixel with the highest priority among the pixels to be repaired in the repair area, and replaces the pixel values inside each embedding patch with image values from a known area.
  • A priority-calculation patch, larger in size than the embedding patch, is placed at each pixel on the boundary of the repair area.
  • A boundary line extending from the contour of the priority-calculation patch on the known-area side toward the edge is detected, and pixels whose priority-calculation patch contains a detected boundary line are given a higher priority than pixels whose priority-calculation patch does not.
  • The prior arts used the K-means method, the fuzzy K-means method, or the mean shift method for color clustering.
  • The K-means method is the simplest and most widely known clustering method. It defines a prototype by a centroid, usually the mean of a group of points, and its main input parameter is the number of clusters, K.
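The K-means procedure described above can be sketched as follows. This is an illustrative sketch only, not code from the patent; the function name, the random initialization, and the fixed iteration count are assumptions.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means: each cluster prototype is its centroid (the mean of
    its group of points), and the main input parameter is K, the number
    of clusters, as described above."""
    rng = np.random.default_rng(seed)
    # Random initial centroids -- the source of the run-to-run variability
    # noted later in the text.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its points.
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
    return labels, centroids
```

Note that the result depends on the random initial centroids, which is exactly the instability the invention aims to avoid.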
  • the fuzzy K-means method is an extension of the K-means method.
  • In the K-means method each data point, or feature vector, belongs to exactly one cluster, whereas the fuzzy K-means method allows each data point to belong to one or more clusters with membership degrees between 0 and 1, giving the clusters fuzzy boundaries.
  • The mean shift method is also widely used for color clustering. It performs an iterative procedure that shifts each data point to the average of its neighboring data points.
  • In the K-means and fuzzy K-means methods, the initial positions of the centroids are randomly generated. This can produce poor-quality clusters, and these methods therefore may not produce the same result on every run.
  • The mean shift method also has problems.
  • It does not require a fixed number of clusters, unlike the K-means and fuzzy K-means methods, but it is sensitive to local peaks. Moreover, the clustering result depends strongly on the bandwidth h of the kernel density estimator.
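The mean shift iteration described above can be sketched as follows. This is an illustrative sketch under the usual Gaussian-kernel formulation; the function name and iteration count are assumptions, not the patent's code.

```python
import numpy as np

def mean_shift(points, h, iters=30):
    """Mean shift: iteratively move each point to the kernel-weighted mean
    of its neighbours.  The Gaussian kernel bandwidth h strongly affects
    the result, as the text notes."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        # Squared distances from every shifted point to every original point.
        d2 = ((shifted[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * h * h))          # Gaussian kernel weights
        shifted = (w @ points) / w.sum(axis=1, keepdims=True)
    return shifted  # points have collapsed toward the density modes
```

Points belonging to the same density mode converge to the same location; the number of distinct locations plays the role of the cluster count, which is why the choice of h is so influential.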
  • To solve the problems of the prior art, the present invention provides a color clustering system based on tensor voting, and a corresponding method, which improve color space analysis and segmentation by generating a color feature space and performing tensor voting on it.
  • The system comprises a color feature space generation unit for generating color clusters by separating the colors in an input image;
  • a tensor voting unit configured to perform tensor voting on the color clusters generated by the color feature space generation unit; and
  • a color space analysis and segmentation unit for analyzing and segmenting each color cluster tensor-voted by the tensor voting unit. The system is characterized in that it comprises these units.
  • The color feature space generation unit includes an image conversion module for converting an image input by the user into a grayscale image for color clustering, and an edge detection module for distinguishing the background and the object in the converted image by using an edge detection method.
  • The tensor voting unit may include: an encoding module for encoding the color feature vector of each color cluster generated by the color feature space generation unit into a second-order tensor; a tensor shape classification module for dividing the tensors generated by the encoding module into ball shapes or stick shapes; a tensor restructuring module that readjusts the structure of each tensor classified by the tensor shape classification module; and a center point detection module for detecting the center point of each tensor adjusted by the tensor restructuring module.
  • The color space analysis and segmentation unit may include: a color cluster analysis module configured to analyze the structure of each color cluster using the center points detected by the center point detection module and to measure the number of dominant colors; and a segmentation module configured to identify the local maxima from the number of dominant colors analyzed by the color cluster analysis module and to remove false maxima for effective color clustering.
  • There is also provided a color clustering method based on tensor voting, which uses a color clustering system based on tensor voting that includes a color feature space generation unit, a tensor voting unit, and a color space analysis and segmentation unit.
  • The method includes a color feature space generation process in which the color feature space generation unit generates color clusters by dividing the colors in an input image; a tensor voting process in which the tensor voting unit performs tensor voting on each color cluster generated through the color feature space generation process; and a color space analysis and segmentation process in which the color space analysis and segmentation unit analyzes and segments each of the tensor-voted color clusters. The method is characterized in that it comprises these processes.
  • The color feature space generation process may include an image conversion step of converting, by the color feature space generation unit, an image input by a user into a grayscale image for color clustering; and an edge detection step of performing edge detection, by the color feature space generation unit, to distinguish the object from the background in the image converted in the image conversion step.
  • The edge detection step considers only the pixels of homogeneous areas in order to prevent errors caused by boundary pixels; a non-boundary pixel is taken to be the pixel having the minimum edge size value.
  • The edge size used in this edge detection step is calculated by the following equation.
  • The tensor voting process may include: an encoding step in which the tensor voting unit encodes the color feature vector of each color cluster generated by the color feature space generation unit into a second-order tensor; a tensor shape classification step in which the tensor voting unit divides the generated tensors into ball shapes or stick shapes; and a tensor restructuring step in which the tensor voting unit clarifies the structure of each shape-classified tensor by accumulating the votes, carrying size and direction information, that are cast by neighboring tokens within a voting field of scale σ.
  • In the tensor shape classification step, the tensor T may be divided into tensor shapes by the following equation.
  • The following equation may be used to calculate the strength of a vote received from a neighboring token.
  • The color space analysis and segmentation process includes: a color cluster analysis step in which the color space analysis and segmentation unit analyzes the structure of each color cluster from its center point and measures the number of dominant colors; a step in which the color space analysis and segmentation unit determines the local maxima from the number of dominant colors analyzed in the color cluster analysis step; and a step in which the color space analysis and segmentation unit removes false maxima using the identified local maxima. The process is characterized in that it comprises these steps.
  • The present invention has the effect of improving image segmentation of objects, compared with the prior art, when performing color clustering.
  • Unlike the prior art, the present invention can automatically measure the number of dominant colors, so that the result of each run is identical and stable.
  • FIG. 1 is an overall configuration diagram of a color clustering system based on tensor voting according to the present invention.
  • FIG. 5 is a view comparing the present invention with the mean shift method.
  • FIG. 6 is a view of performing the K-means method.
  • FIG. 7 shows the results of the present invention for different voting ranges σ.
  • FIG. 8 is a general flow diagram for a method of color clustering based on tensor voting according to the present invention.
  • FIG. 11 is a detailed flowchart of a color space analysis and segmentation process according to the present invention.
  • a color clustering system based on tensor voting according to an embodiment of the present invention will be described with reference to FIG. 1.
  • FIG. 1 is a schematic diagram conceptually illustrating a color clustering system 100 based on tensor voting according to a characteristic aspect of the present invention.
  • The color clustering system 100 includes a color feature space generation unit 110 for generating color clusters by dividing the colors in an input image, a tensor voting unit 120 for performing tensor voting on each color cluster generated through the color feature space generation unit 110, and a color space analysis and segmentation unit 130 for analyzing and segmenting each of the tensor-voted color clusters.
  • The image conversion module 111 converts an image input by a user into a grayscale image, and
  • the edge detection module 112 distinguishes the object from the background in the converted image by using an edge detection method.
  • the image conversion module 111 will be described below.
  • For an image with multiple colors, generation of the color feature space begins by mapping the image into a multidimensional color feature space such as RGB or HSV.
  • This grayscale image conversion is for performing tensor voting.
  • the two color components a * and b * are used as color feature spaces to reduce the effects of uneven lighting.
  • edge detection module 112 will be described below.
  • edge detection is started.
  • Since the colors of an object's boundary pixels are not the actual object colors, applying all pixel values may produce unexpected results, such as mixing the object with the background or subdividing the object into small pieces.
  • One of the edge detection measures is calculated from the grayscale image I of the input image by Equation 1 below.
  • Non-boundary pixels are considered to be the pixels with the smallest edge size values within a 3x3 window.
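As an illustrative sketch of this step: since Equation 1 is not reproduced here, a simple gradient magnitude stands in for the patent's edge measure, and non-boundary pixels are selected as the 3x3-window minima of the resulting edge map. The function names are assumptions.

```python
import numpy as np

def edge_magnitude(gray):
    """Gradient-magnitude edge map of a grayscale image I (a stand-in for
    the patent's Equation 1, which is not reproduced here)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def non_boundary_mask(edge):
    """Mark pixels whose edge size is the minimum within their 3x3 window;
    these 'non-boundary' pixels of homogeneous areas feed the color
    feature space, as the text describes."""
    padded = np.pad(edge, 1, mode="edge")
    # Minimum over the 9 shifted views = 3x3 minimum filter.
    win_min = np.min(
        [padded[dy:dy + edge.shape[0], dx:dx + edge.shape[1]]
         for dy in range(3) for dx in range(3)], axis=0)
    return edge <= win_min
```

On a half-dark, half-bright image the mask is true in the two flat halves and false along the intensity step, which is the intended behavior: boundary pixels are excluded from clustering.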
  • FIG. 2 illustrates a * and b * color distributions of an original input image and a sample image. Specifically, FIG. 2 (a) shows an original image, FIG. 2 (b) shows a sample image, and FIG. 2 (c) shows FIG. 2. The color distribution of (a) and FIG. 2 (d) show the color distribution of FIG. 2 (b).
  • In short, the color feature space generation converts the image input by the user to grayscale and detects edges for color clustering, and then generates the image candidate region, that is, the color feature space.
  • the color feature vector of each color cluster generated by the color feature space generation unit 110 is encoded into a second order tensor.
  • Tensor voting is a framework for grouping perceptual structures by casting votes between tokens in an image.
  • Tensor voting is an integrated framework developed by the Computer Vision Group of the University of Southern California (USC); it is used to infer surfaces, boundaries, or curves, and the intersections of curves, from sparse, noisy data in 2D, 3D, or even N dimensions.
  • Tensor voting is an integrated inference system that can handle various problems in the field of computer vision; it consists of tensors expressing the input data (tokens) and a voting method for communication between tokens.
  • A tensor is a geometric quantity that extends the concept of a vector and can be expressed in the form of an ellipse or ellipsoid. These tensors represent the characteristics of the input data (tokens) and are refined through voting.
  • A vote can be defined as a single directional component given by a tangent vector or a normal vector.
  • the encoding module 121 will be described as follows.
  • the color feature vector in each color cluster is encoded with a second order tensor.
  • the tensor shape classification module 122 will be described as follows.
  • The ellipse shape of an encoded tensor indicates the type of the cluster's representative structure, and the size of the ellipse indicates the certainty of this information.
  • The ball tensor represents an isolated point, and the stick tensor represents a token belonging to a curve portion.
  • A general tensor T can be decomposed into a stick and a ball component by Equation 2 below.
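Equation 2 itself is not reproduced in this text. The sketch below therefore assumes the standard tensor-voting decomposition of a 2D second-order tensor into stick and ball components via eigendecomposition; the function name and return values are illustrative.

```python
import numpy as np

def decompose(T):
    """Split a 2-D second-order tensor T into stick and ball parts.
    Standard tensor-voting decomposition (assumed here, since Equation 2
    is not reproduced): T = (l1 - l2) e1 e1^T + l2 (e1 e1^T + e2 e2^T),
    with eigenvalues l1 >= l2 >= 0."""
    vals, vecs = np.linalg.eigh(T)            # eigenvalues in ascending order
    l2, l1 = vals
    e2, e1 = vecs[:, 0], vecs[:, 1]
    stick = (l1 - l2) * np.outer(e1, e1)      # curve-like (stick) component
    ball = l2 * (np.outer(e1, e1) + np.outer(e2, e2))  # isotropic (ball) part
    return stick, ball, l1 - l2, l2           # saliencies: stick, ball
```

The saliency l1 - l2 is high for stick-like tokens (curve members) and l2 is high for ball-like tokens (isolated points), matching the classification described above.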
  • the tensor restructuring module 123 will be described below.
  • A tensor has the shape and size initialized by the tensor shape classification module 122 and gradually deforms as it accumulates the votes cast by neighboring tokens within a specific voting field of scale σ.
  • The scale of the voting field, σ, controls both the size of the voting neighborhood and the strength of the votes.
  • Received votes carry size and direction information.
  • The strength of a received vote is calculated by the decay function of Equation 3 below.
  • The direction of a received vote is the normal of a smooth curve between the voter and the receiver.
  • The shape of each tensor is thereby refined, so that the resulting tensors can be used to analyze the color feature space and perform color clustering.
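Equation 3 is likewise not reproduced in this text. The sketch below assumes the decay function commonly used in tensor voting, in which vote strength falls off with the arc length s and curvature κ of the connecting arc at scale σ; the constant c and the function name are assumptions.

```python
import numpy as np

def vote_strength(s, kappa, sigma, c=0.1):
    """Assumed stand-in for the unreproduced Equation 3: the standard
    tensor-voting decay  DF = exp(-(s^2 + c * kappa^2) / sigma^2),
    where s is the arc length from voter to receiver, kappa the curvature
    of the connecting arc, sigma the voting-field scale, and c a
    curvature-penalty constant."""
    return np.exp(-(s ** 2 + c * kappa ** 2) / sigma ** 2)
```

Consistent with the text, a larger σ both widens the neighborhood that receives non-negligible votes and increases the strength of votes at a given distance.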
  • center point detection module 124 will be described below.
  • The center point detection module 124 takes the clusters whose shapes have been clarified by the tensor restructuring module 123 and detects their center points.
  • Thus the color clusters produced by the color feature space generation have their centroids extracted through tensor voting, and then enter the next stage, the color space analysis and segmentation unit 130.
  • Examining the color space analysis and segmentation unit 130, it includes a color cluster analysis module 131, which analyzes the structure of each color cluster using the center points detected by the center point detection module 124 and measures the number of dominant colors, and
  • a segmentation module 132, which identifies the local maxima from the number of dominant colors analyzed by the color cluster analysis module 131 and removes false maxima for effective color clustering.
  • the color cluster analysis module 131 will be described below.
  • A resulting tensor located at a center point is larger, and conveys more information, than one that is not.
  • the shape of the token at the center point represents the general shape of the color cluster.
  • the shape and size of the resulting tensor can be used to analyze the data clusters.
  • Simplified data of the color feature space are shown in FIG. 3(a).
  • the token is encoded by the second order tensor.
  • This voting process is performed by casting votes between tensors.
  • The stick component of the tensor corresponding to the center point of an elongated cluster is high.
  • The normal vector of the tensor at the center point indicates the normal direction of the cluster.
  • segmentation module 132 will be described as follows.
  • The number of dominant colors analyzed by the color cluster analysis module 131 is equal to the number of local maxima in the map.
  • The inspection (check) range can be changed to adjust the result.
  • The smaller the check range, the more local maxima can be extracted.
  • In the experiments, the check range is set to the voting range, i.e., σ.
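The dominant-color counting described above — counting local maxima within a check range — can be sketched as follows for a 1D density; the function name and the strict-maximum rule are assumptions, not the patent's exact procedure, but the sketch reproduces the stated property that a smaller check range admits more maxima.

```python
import numpy as np

def local_maxima(density, check_range):
    """Indices i whose density value is the strict maximum within
    +/- check_range.  A smaller check range admits more maxima, as the
    text notes; setting check_range from the voting range sigma ties the
    count to the voting scale."""
    idx = []
    n = len(density)
    for i in range(n):
        lo, hi = max(0, i - check_range), min(n, i + check_range + 1)
        window = density[lo:hi]
        # Strict maximum: the value must occur exactly once in the window.
        if density[i] == window.max() and (window == density[i]).sum() == 1:
            idx.append(i)
    return idx
```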
  • Each cluster can be represented by a Gaussian distribution whose mean is located at the center point.
  • The standard deviation along each axis is proportional to the corresponding eigenvalue of the tensor at the center point.
  • FIG. 3(d) shows the Gaussian distributions for the three clusters of FIG. 3(a).
  • The Gaussian Bayes classifier of cluster k for a feature vector x in the feature space is as follows.
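Since the classifier formula itself is not reproduced in this text, the following sketch shows a generic Gaussian Bayes classifier of the kind described, with the mean at the cluster center point and the covariance derived from the center tensor; the helper names and the use of log-scores are assumptions.

```python
import numpy as np

def gaussian_score(x, mean, cov, prior=1.0):
    """Log-score of feature vector x under a cluster's Gaussian
    N(mean, cov), weighted by a prior -- a generic Gaussian Bayes
    classifier, since the patent's exact formula is not reproduced."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return (np.log(prior) - 0.5 * logdet
            - 0.5 * d @ np.linalg.solve(cov, d))

def classify(x, means, covs):
    """Assign x to the cluster k with the highest score."""
    scores = [gaussian_score(x, m, c) for m, c in zip(means, covs)]
    return int(np.argmax(scores))
```

In the setting above, each `mean` would be a detected center point and each `cov` would be built from the eigenvalues of the tensor at that center point.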
  • FIG. 3(e) shows the clustering results obtained by using the distance from the center point and the shape of the cluster.
  • The number of clusters, i.e., K, is estimated by 10-fold cross-validation.
  • Background elements are components that have many pixels touching the image boundaries.
  • Noise components usually include scattered pixels or small width strokes.
  • Residual text segmentation is used to produce the final binary text image.
  • FIG. 5 shows parts of the images produced by the mean shift method and by the present invention.
  • the present invention is compared with the mean shift method for performing image color segmentation in terms of segmentation quality.
  • The bandwidth parameter h of the mean shift method is adjusted so that the number of clusters is similar to the number of dominant colors estimated by the present invention.
  • The mean shift method also uses the color components a* and b*.
  • the present invention clearly separates the objects and produces better results than the mean shift method.
  • the present invention can automatically estimate the number of dominant colors.
  • FIG. 7 shows that the present invention is not sensitive to the voting range σ; similar results are achieved for different values of σ.
  • FIG. 8 is a flowchart illustrating a color clustering method based on tensor voting according to a characteristic aspect of the present invention. It comprises a color feature space generation process (S100), in which the color feature space generation unit 110 generates color clusters by dividing the colors in an input image,
  • a tensor voting process (S200), in which tensor voting is performed on each color cluster generated through the color feature space generation process (S100), and a color space analysis and segmentation process (S300), in which each of the tensor-voted color clusters is analyzed and segmented.
  • In the color feature space generation process, the image conversion module 111 converts an input image into a grayscale image to perform color clustering (S110), and the edge detection module 112 performs edge detection to distinguish the object from the background using the image converted in image conversion step S110.
  • FIG. 10 is a detailed flowchart of the tensor voting process S200.
  • In the tensor voting process, the encoding module 121 encodes the color feature vector of each color cluster generated by the color feature space generation unit 110 into a second-order tensor.
  • The tensor shape classification module 122 divides the tensors generated by the encoding module 121 into ball shapes or stick shapes (S220), the tensor structure readjustment module 123 readjusts the structure of the shape-classified tensors (S230), and
  • the center point detection module 124 detects the center points from the tensors produced by the tensor structure readjustment module 123 (S240).
  • FIG. 11 is a flowchart of the color space analysis and segmentation process (S300).
  • In this process, the color cluster analysis module 131 analyzes the structure of each color cluster using the center points detected by the center point detection module 124 and measures the number of dominant colors (S310); the segmentation module 132 identifies the local maxima from the number of dominant colors analyzed by the color cluster analysis module 131 (S320) and removes false maxima (S330), whereby effective color clustering is performed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to color clustering based on tensor voting and, more particularly, to a color clustering system and method based on tensor voting that automatically estimate the number of dominant colors during tensor voting while performing color clustering of an input image, and that perform clustering using the data density of each color, thereby achieving effective image segmentation. To achieve the technical features described above, there is provided a color clustering system based on tensor voting comprising: a color feature space generation unit for classifying the colors in an input image so as to generate color clusters for each color; a tensor voting unit for performing tensor voting on each color cluster generated by the color feature space generation unit; and a color space analysis and segmentation unit for analyzing and segmenting each color cluster on which tensor voting has been performed by the tensor voting unit.
Furthermore, to achieve said technical features, there is provided a color clustering method based on tensor voting, using the color clustering system based on tensor voting that comprises the color feature space generation unit, the tensor voting unit, and the color space analysis and segmentation unit, the method comprising: a color feature space generation step in which the color feature space generation unit classifies the colors in an input image so as to generate color clusters for each color; a tensor voting step in which the tensor voting unit performs tensor voting on each color cluster generated through the color feature space generation step; and a color space analysis and segmentation step in which the color space analysis and segmentation unit analyzes and segments each of the color clusters on which tensor voting has been performed. The system and method of the present invention, having the objectives and functions described above, improve the image segmentation of an object compared with conventional techniques when performing color clustering. In addition, they can automatically estimate the number of dominant colors, unlike conventional techniques, thereby making the result of each color clustering run uniform and stable.
PCT/KR2011/007765 2010-10-18 2011-10-18 Color clustering system and method based on tensor voting WO2012053811A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020100101495A KR101151739B1 (ko) 2010-10-18 2010-10-18 Color clustering system based on tensor voting and method thereof
KR10-2010-0101495 2010-10-18

Publications (3)

Publication Number Publication Date
WO2012053811A2 true WO2012053811A2 (fr) 2012-04-26
WO2012053811A3 WO2012053811A3 (fr) 2012-06-14
WO2012053811A9 WO2012053811A9 (fr) 2012-08-02

Family

ID=45975721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/007765 WO2012053811A2 (fr) 2010-10-18 2011-10-18 Color clustering system and method based on tensor voting

Country Status (2)

Country Link
KR (1) KR101151739B1 (fr)
WO (1) WO2012053811A2 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021067665A3 (fr) * 2019-10-03 2021-05-14 Photon-X, Inc. Enhancement of artificial intelligence routines using 3D data
CN113591981A (zh) * 2021-07-30 2021-11-02 上海建工四建集团有限公司 Artificial-intelligence-based method and system for surveying information on existing terrazzo
CN113643399A (zh) * 2021-08-17 2021-11-12 河北工业大学 Adaptive color image reconstruction method based on tensor-train rank
US12032278B2 (en) 2021-08-06 2024-07-09 Photon-X, Inc. Integrated spatial phase imaging

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476831B (zh) * 2020-03-20 2023-07-18 清华大学 PCB image color transfer apparatus and method based on cluster analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001039498A2 (fr) * 1999-11-24 2001-05-31 Koninklijke Philips Electronics N.V. Method and apparatus for detecting moving objects in video conferencing and other applications

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001039498A2 (fr) * 1999-11-24 2001-05-31 Koninklijke Philips Electronics N.V. Method and apparatus for detecting moving objects in video conferencing and other applications

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEE, GUEE SANG ET AL.: 'Extraction of Text Alignment by Tensor Voting and its Application to Text Detection' KIISE vol. 36, no. 11, November 2009, pages 912 - 919 *
TOAN NGUYEN ET AL.: 'Determination of Initial Parameters for K-means Clustering by Tensor Voting' KIISE vol. 36, no. 2(A), November 2009, pages 96 - 97 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021067665A3 (fr) * 2019-10-03 2021-05-14 Photon-X, Inc. Enhancement of artificial intelligence routines using 3D data
CN113591981A (zh) * 2021-07-30 2021-11-02 上海建工四建集团有限公司 Artificial-intelligence-based method and system for surveying information on existing terrazzo
CN113591981B (zh) * 2021-07-30 2024-02-09 上海建工四建集团有限公司 Artificial-intelligence-based method and system for surveying information on existing terrazzo
US12032278B2 2021-08-06 2024-07-09 Photon-X, Inc. Integrated spatial phase imaging
CN113643399A (zh) * 2021-08-17 2021-11-12 河北工业大学 Adaptive color image reconstruction method based on tensor-train rank
CN113643399B (zh) * 2021-08-17 2023-06-16 河北工业大学 Adaptive color image reconstruction method based on tensor-train rank

Also Published As

Publication number Publication date
KR101151739B1 (ko) 2012-06-15
KR20120040004A (ko) 2012-04-26
WO2012053811A9 (fr) 2012-08-02
WO2012053811A3 (fr) 2012-06-14

Similar Documents

Publication Publication Date Title
WO2016070462A1 Display panel defect detection method based on histogram of oriented gradient
WO2014058248A1 Image inspection apparatus for estimating the slope of a singleton, and method therefor
US8144932B2 Image processing apparatus, image processing method, and interface apparatus
WO2016163755A1 Quality measure-based face recognition method and apparatus
CN115082419A Blow-molded luggage production defect detection method
JP2000003452A Face detection method in digital images, face detection apparatus, image determination method, image determination apparatus, and computer-readable recording medium
JP2000132688A Face part detection method and apparatus therefor
WO2020248515A1 Pedestrian and vehicle detection and recognition method combining inter-frame difference and a Bayes classifier
CN106909884B Hand region detection method and apparatus based on a hierarchical structure and a deformable part model
WO2012053811A2 Color clustering system and method based on tensor voting
Chen et al. Detection of human faces in colour images
Wah et al. Analysis on feature extraction and classification of rice kernels for Myanmar rice using image processing techniques
US7715632B2 Apparatus and method for recognizing an image
CN110348307B Path edge recognition method and system for a crane metal-structure climbing robot
JP3499305B2 Face region extraction method and exposure amount determination method
US7403636B2 Method and apparatus for processing an image
CN1237485C Method for masking the faces of news interviewees using fast face detection
JP3576654B2 Exposure amount determination method, figure extraction method, and face region determination method
WO2015076433A1 Facial image analysis method using a local micro-pattern
Ravi et al. Face detection with facial features and gender classification based on support vector machine
Youlian et al. Face detection method using template feature and skin color feature in rgb color space
Gui et al. A fast caption detection method for low quality video images
CN108604300B Extraction of document page images from electronically scanned images having non-uniform background content
KR101711328B1 Method for distinguishing children from adults using the ratio of head height to body height in images acquired by an image capturing device
Kamal et al. Human detection based on HOG-LBP for monitoring sterile zone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11834611

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11834611

Country of ref document: EP

Kind code of ref document: A2