CN110163869A - Image repeated element segmentation method, intelligent device and storage medium - Google Patents


Info

Publication number
CN110163869A
CN110163869A
Authority
CN
China
Prior art keywords
pixel
super
image
distance
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910313823.2A
Other languages
Chinese (zh)
Other versions
CN110163869B (en)
Inventor
袁剑虹
徐鹏飞
黄惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201910313823.2A
Publication of CN110163869A
Application granted
Publication of CN110163869B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 18/23 Pattern recognition: clustering techniques
    • G06T 7/11 Image analysis: region-based segmentation
    • G06T 7/187 Image analysis: segmentation involving region growing, region merging or connected component labelling
    • G06T 7/194 Image analysis: foreground-background segmentation
    • G06V 10/56 Image or video feature extraction: features relating to colour
    • G06T 2207/20092 Indexing scheme for image analysis: interactive image processing based on input by user

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image repeated element segmentation method, an intelligent device and a storage medium. The method includes: reading in an image and grouping adjacent pixels into blocks with similar texture, color and brightness; after the image is represented as superpixels, generating a foreground region, where during clustering the superpixels are merged according to a measure covering the color distance, the spatial distance and the distance from each superpixel to the center of the path drawn by the user, forming connected regions; diffusing the clustering results top-down onto each superpixel, assigning different weights to each clustering result and obtaining each superpixel's matching error, that is, its foreground probability; and taking the foreground superpixel color information as data input, using the color similarity between superpixels as the similarity matrix, and iteratively selecting the data among all samples that can serve as representative samples, so that the repeated elements in the image are quickly segmented through simple interaction.

Description

Image repeated element segmentation method, intelligent device and storage medium

Technical Field

The present invention relates to the technical field of image segmentation processing, and in particular to an image repeated element segmentation method, an intelligent device and a storage medium.

Background Art

Image segmentation is an important topic in image understanding and computer vision; its applications span medicine, aerospace, graphics, robotics and other fields, and it has received great attention in both theoretical research and practical applications. Segmentation is a key link between image processing and image analysis, and the quality of the segmentation result directly affects further information processing. Image segmentation therefore serves as a cornerstone of the whole image processing pipeline and has an important influence on subsequent steps such as feature extraction. Repeated elements, in turn, play an indispensable role in everyday life. Automatically separating repeated elements from an image is of practical significance: it reduces repetitive search work and allows all repeated elements of the same kind to be treated as a whole and separated out directly.

Research on image segmentation algorithms goes back several decades, and thousands of segmentation algorithms have been proposed. Because of the complexity of images, much segmentation work cannot be completed automatically by a computer, while fully manual segmentation is too labour-intensive and its standards are hard to unify; interactive image segmentation methods have therefore emerged to quickly locate the target region and reduce the amount of computation. Common image segmentation techniques segment and extract a single object of interest from an image; extracting multiple objects of interest at the same time requires more prior knowledge.

The first prior art related to the present invention is interactive image segmentation. Graph cuts, proposed by Boykov et al., is an interactive segmentation algorithm based on combinatorial optimization and an efficient graph-theoretic image segmentation method. Algorithms of this kind require the user to mark some pixels as object and background; a weighted graph is constructed in which the nodes represent the pixels of the original image and the edges encode the adjacency between pixels, and the segmentation is transformed into a minimum-cut problem. The Lazy Snapping system improves on the graph cut algorithm: it combines the advantages of region-based and edge-based segmentation and is efficient and accurate, but it requires the user to provide foreground and background pixel information. The GrabCut algorithm proposed by Cheng optimizes the classic graph cut algorithm; the interaction is changed from brush strokes to a rectangular box, the area outside the box is defined as background, and only the pixels inside the box are segmented, which to some extent compensates for the time cost of graph cut. Jifeng proposed an interactive image segmentation method based on region merging, in which the user only needs strokes (called markers) to roughly indicate the position and extent of the object and the background; a maximal-similarity-based region merging mechanism then uses the markers to guide the merging process.

The drawback of the first prior art is that existing interaction-based segmentation techniques often rely too heavily on the user's input: the user must specify some foreground and background seeds, and when the method is applied to segment multiple objects the user inevitably has to enter the foreground and background one by one, which is too cumbersome.

The second prior art related to the present invention is repeated element detection; repeated elements have very similar texture, color, brightness and geometric features. Cheng et al. make full use of the geometric relationships between repeated elements and match the repeated elements in the image globally against a template. Huang et al. consider the appearance similarity between repeated elements and design a new optimization model based on a Markov random field, adding a new smoothness term to segment repeated elements located at different positions of the image simultaneously. Leung et al. first detect elements of interest in the image, match these elements with their surrounding pixels and estimate the affine transformations between them, thereby expanding similar image patches into larger connected regions to achieve fast clustering.

The concept of co-segmentation, that is, simultaneously segmenting the common parts of different images, was introduced in 2006. Inference in this segmentation model minimizes an energy consisting of an MRF term that encodes spatial coherence and a global constraint that tries to match the appearance histograms of the common parts, yielding the segmentation result. The model proposed by Hochbaum bypasses the measurement of histogram differences in a direct way, which is where it differs from the original co-segmentation. Armand et al. propose a discriminative clustering framework for image co-segmentation that uses the information shared by several images of the same object class to improve the segmentation of all the images; discriminative clustering is well suited to the co-segmentation problem and can reuse existing features for supervised classification or detection.

The drawbacks of the second prior art are as follows. Cheng et al. start from the geometric features of the repeated elements; although the matched results are very similar to the template, at the input level the template used for matching plays a crucial role, and a slightly deviated contour may lead to inconsistent results. The method of Huang et al. essentially optimizes the graph cut algorithm so that it applies to repeated element segmentation; it lacks direct observation of the repeated elements and also requires the user to set foreground and background pixels.

The third prior art related to the present invention is region-of-interest selection. The Paint Selection algorithm provides a brush-like mouse operation that lets the user paint over the target of interest on the image and gives real-time feedback, so that the user directly sets an arbitrary region of interest. Xu et al. (Lazy Selection) analyse the brush strokes entered by the user to filter out the combinations of regions that best match the user's intention, revealing from the given input the regions the user is most interested in.

The drawbacks of the third prior art are that Paint Selection does not support selecting multiple connected regions at the same time, so it cannot be used to select and extract repeated elements, and selecting multiple regions one after another costs a great deal of effort. The multiple regions extracted by Lazy Selection are not all repeated elements and may include closed regions of differing shapes; the special properties of repeated elements and their complex appearance in the image are not taken into account.

All current research requires the user to provide information about the repeated elements for matching across the whole image. In other words, most existing repeated element extraction techniques require the user either to specify an original template whose contour is searched globally, or to draw on the repeated elements to set them as foreground pixels and then set background pixels with a brush, after which a graph cut algorithm segments out the repeated objects. These works all, to a greater or lesser extent, rely on prior knowledge of the color or contour of the repeated elements themselves, which must be entered manually by the user and is too laborious.

Therefore, the existing technology still needs to be improved and developed.

Summary of the Invention

The technical problem to be solved by the present invention is that, in the prior art, segmenting the repeated elements of an image requires the user to provide information about the repeated elements for whole-image matching. The present invention provides an image repeated element segmentation method, an intelligent device and a storage medium that quickly segment the repeated elements in an image through simple interaction: given a color image containing repeated elements, the user simply and roughly drags a brush, and the repeated elements the user expects are quickly segmented out. The focus of the present invention is the rough estimation of the foreground region and the detection of repeated elements; an image segmentation algorithm is used to separate the repeated elements of the foreground from the background.

The technical solution adopted by the present invention to solve the technical problem is as follows:

An image repeated element segmentation method, wherein the image repeated element segmentation method comprises:

reading in an image and pre-processing it into groups by linear iterative clustering, forming blocks of adjacent pixels with similar texture, color and brightness;

after the image has been formed into superpixels, generating a foreground region, wherein during clustering the superpixels are merged according to a measure covering the color distance, the spatial distance and the distance from each superpixel to the center of the path input by the user, forming connected regions;

diffusing the clustering results top-down onto each superpixel and assigning different weight information to each clustering result, so as to obtain the matching error of each superpixel, that is, its foreground probability;

taking the foreground superpixel color information as data input and the color similarity between superpixels as the similarity matrix, and screening out, through continual iteration, the data among all samples that can serve as representative samples.

In the image repeated element segmentation method, the step of reading in an image and pre-processing it into groups by linear iterative clustering, forming blocks of adjacent pixels with similar texture, color and brightness, comprises:

combining the spatial position feature and the color feature of each pixel into a five-dimensional feature vector Fk{lk, ak, bk, xk, yk};

wherein the l, a and b attributes correspond to the pixel's values in LAB space, x and y denote the coordinate position of the pixel in the image, and k denotes each pixel;

randomly selecting N seed points, N being determined by the size or number of the superpixels, and searching within a specified range for the pixels closest to each seed point, the distance being the similarity distance D between pixels:

wherein ds and dc respectively denote the spatial distance between pixels and the color difference in CIELAB space, and m and s coordinate the relationship between these two attributes; the parameter s is calculated from the largest possible value in XY space, and the parameter m represents the largest possible value of the distance in LAB space;

the distance being determined by the features of the pixels;

classifying similar pixels into one class, finding the corresponding class for every pixel in the image, then computing a new average cluster center over all the pixels in each class, and iterating again to obtain new N superpixels until convergence.

In the image repeated element segmentation method, the value range of m is [1, 40].

In the image repeated element segmentation method, the step of, after the image has been formed into superpixels, generating a foreground region, wherein during clustering the superpixels are merged according to the color distance, the spatial distance and the distance from each superpixel to the center of the path input by the user to form connected regions, comprises:

merging all the superpixels bottom-up until all superpixels belong to one class, the merging rule being that the nearest distance is merged first;

the nearest distance covering the color distance, the spatial distance and the distance from the superpixel to the center of the path input by the user;

starting from each superpixel at the bottom layer, selecting the two superpixels with the closest similarity and aggregating them into a cluster, and computing the similarity of two clusters:

Gi,j = dc·σ(dP)·σ(ds);

wherein Gi,j measures the similarity between the two clusters and represents the distance between them, dc is the Euclidean distance in the RGB color space, dP denotes the spatial distance between the two clusters, defined as the distance between their center points, and ds denotes the average distance from the two clusters to the brush.

In the image repeated element segmentation method, superpixels within the width of the brush are given a higher weight by replacing the value ds with ds/wi, where wi is the brush width; σ(·) denotes the Sigmoid function, used to limit the influence of the clusters' spatial distance on the overall measure;

if dP and ds tend to positive infinity, the value mapped by σ(·) tends to 1.

In the image repeated element segmentation method, the degree to which each cluster matches the brush region is analysed from the perspective of spatial distribution and distance:

R = β(μp→s + μs→p) + (1-β)(σp→s + σs→p)

wherein μp→s and μs→p measure how well the brush and each cluster match in position, σp→s and σs→p express how well each cluster and the brush match in distribution, and the parameter β is introduced to balance these two abilities; μp→s denotes the average nearest distance from the sampled points of the candidate to the brush, μs→p denotes the average nearest distance from the brush to the sampled points of the candidate, σp→s denotes the standard deviation of the nearest distances from the sampled points to the brush, σs→p denotes the standard deviation of the nearest distances from the brush to the sampled points, and β takes the value 0.5.

In the image repeated element segmentation method, the step of diffusing the clustering results top-down onto each superpixel and assigning different weight information to each clustering result, so as to obtain the matching error of each superpixel, that is, its foreground probability, comprises:

on the basis of the superpixel segmentation, uniformly sampling a fixed number of points from each superpixel as its representatives, a newly formed cluster being composed of all the sampled points it contains;

normalizing all superpixel scores to [0, 255], a score closer to 255 meaning closer to the foreground;

computing the foreground expressiveness of the superpixels in each cluster:

during the merging process, some superpixels are gradually merged with the background region and their matching error gradually increases;

during the merging process, some superpixels are gradually merged with the foreground region and their matching error shows a decreasing trend;

wherein Ri is the foreground reference value of the i-th cluster, wik is the weight of the i-th cluster containing the k-th superpixel, the weight being the total number of pixels in the i-th cluster, and the value of n is not fixed for each superpixel; all superpixel expressiveness scores are normalized to [0, 255], with scores close to 255 close to the foreground and 0 close to the background.

In the image repeated element segmentation method, the step of taking the foreground superpixel color information as data input and the color similarity between superpixels as the similarity matrix, and screening out through continual iteration the data among all samples that can serve as representative samples, comprises:

finding, among all the data, the exemplar points that best represent themselves, measuring the match between them through attraction information and attribution information, and selecting representative samples at the same time;

at each iteration, computing the attraction information r(i,k) and the attribution information a(i,k) between samples, with the following formulas:

a(i,k) ← min[0, r(k,k) + ∑i′∉{i,k} max[0, r(i′,k)]];

wherein i denotes a sample, i′ denotes other samples, k denotes a candidate exemplar center, k′ denotes other candidate exemplar centers, s(i,k) and s(i,k′) denote the similarity matrix entries for different candidate exemplar centers, r(k,k) reflects how unsuitable node k is to be assigned to other cluster centers, and r(i′,k) denotes the values other than r(i,k) and r(k,k), representing the attraction of node k to the other nodes;

after each round of computation, adding a damping coefficient, linearly combining each result with the previous result and substituting it as the new information value.

An intelligent device, wherein the intelligent device comprises: a memory, a processor, and an image repeated element segmentation program stored in the memory and executable on the processor, the image repeated element segmentation program, when executed by the processor, implementing the steps of the image repeated element segmentation method described above.

A storage medium, wherein the storage medium stores an image repeated element segmentation program, and the image repeated element segmentation program, when executed by a processor, implements the steps of the image repeated element segmentation method described above.

The present invention quickly segments the repeated elements in an image through simple interaction: given a color image containing repeated elements, the user simply and roughly drags a brush, and the repeated elements the user expects are quickly segmented out. The focus of the present invention is the rough estimation of the foreground region and the detection of repeated elements; an image segmentation algorithm is used to separate the repeated elements of the foreground from the background.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of an image selected in the image repeated element segmentation method of the present invention, showing the repeated elements in the image;

FIG. 2 is a flow chart of a preferred embodiment of the image repeated element segmentation method of the present invention;

FIG. 3 is a schematic diagram of the user interaction and the foreground extraction in the image repeated element segmentation method of the present invention;

FIG. 4 is an illustration of the superpixel clustering process (clustering tree) in the image repeated element segmentation method of the present invention;

FIG. 5 is a schematic diagram of the brush relationships in the image repeated element segmentation method of the present invention;

FIG. 6 is a foreground prediction diagram in the image repeated element segmentation method of the present invention;

FIG. 7 is a foreground diagram of multiple groups of repeated elements in the image repeated element segmentation method of the present invention;

FIG. 8 is a schematic diagram of the information propagation mechanism in the image repeated element segmentation method of the present invention;

FIG. 9 shows experimental images and binarized ground-truth images in the image repeated element segmentation method of the present invention;

FIG. 10 is a schematic diagram comparing the time consumed by repeated element extraction in the image repeated element segmentation method of the present invention;

FIG. 11 is a schematic diagram comparing the accuracy of different algorithms in the image repeated element segmentation method of the present invention;

FIG. 12 is a schematic diagram comparing the number of user operations of different algorithms in the image repeated element segmentation method of the present invention;

FIG. 13 is a schematic diagram of quickly editing repeated elements in the image repeated element segmentation method of the present invention;

FIG. 14 is a schematic diagram of the operating environment of a preferred embodiment of the intelligent device of the present invention.

Detailed Description of the Embodiments

In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.

The present invention proposes a more automated framework for extracting repeated elements, which can be used to quickly find them. The main idea of the present invention is to quickly extract the repeated elements in an image through simple interaction: given a color image with repeated elements, the user roughly selects the region where the repeated elements are located and obtains precise repeated elements from it, such as the highlighted part of the left image in FIG. 1 (the part showing two lemons), which largely meets the user's expectations.

Superpixels are used to accelerate the algorithm, and along this line all superpixels are clustered once more, in order to obtain rough foreground pixel information from superpixels that carry no semantic information. According to the nature of repeated elements and the brush information from the user interaction, the similarity distance/similarity of two superpixels is measured from three aspects: the color difference between the superpixels, their spatial distance, and their distance to the line drawn by the user jointly determine the similarity between the two superpixels. Each clustering step is then evaluated; the criterion is whether the resulting clusters match the interaction information, that is, whether the aggregated results agree with the interactive hints entered by the user.

From these the foreground part containing the repeated elements is finally determined. In more complex cases, another group or more groups of repeated elements sometimes inevitably appear while the user draws, as in the right image of FIG. 1; the original algorithm cannot tell which group the user expects and falls into confusion. Accordingly, an affinity propagation algorithm based on an information propagation mechanism is then used to classify the complex foreground, yielding each independent group of repeated elements. Through a simple interaction process, the user obtains the result of segmenting the repeated elements in one pass, saving a great deal of time and effort.

The focus of the present invention is to infer relevant prior knowledge about the foreground content from these simple interactions. Images contain an extremely important kind of visual information, namely repeated elements; in the present invention, the repeated elements of an image are defined as sets of pixels with similar texture, color, brightness and other geometric features. Repeated elements are very common in everyday life, and the detection of this type of object is an important research topic in computer vision, geometry processing and computational symmetry analysis.

Common image segmentation techniques only extract a single object of interest from the image, separating one connected region to form the foreground, whereas repeated elements exist as independent individuals, and it is difficult to extract the complete repeated elements by simply applying an ordinary foreground segmentation method. What the present invention studies is multiple objects of interest: repeated elements that are similar in appearance. Since vision is still the main source of information for distinguishing content, making a machine see the world from a human perspective gives the research more practical significance. Because of mutual overlap, illumination changes, object deformation and other factors, recognizing the repeated elements in such images is itself a very challenging task; a computer has a more objective analysis capability than the human eye, and the present invention aims to segment out complete repeated objects while ignoring the errors perceived by the human eye.

These repeated elements occupy an important position in advanced image editing and greatly improve the efficiency of image editing. Unlike the salient region of an image, they are not always distributed at the most conspicuous positions in the image and cannot be captured at a glance. Repeated elements may be scattered at different positions or even in the corners of the whole image, forming multiple irregular, disconnected regions; they therefore lie outside the research scope of psychology, neuroscience and related fields, and their distribution cannot be grasped from the positional information of the whole image.

Therefore, the present invention proposes a simpler framework for extracting repeated elements, which can be used to quickly find them. The main idea of the present invention is to quickly extract the repeated elements in an image through simple interaction: given a color image with repeated elements, the user roughly selects the region where the repeated elements are located, and the present invention obtains a precise group of repeated elements from it, which largely meets the user's expectations.

Specifically, the image repeated element segmentation method according to a preferred embodiment of the present invention, as shown in FIG. 2, comprises the following steps:

Step S10: reading in an image and pre-processing it into groups by linear iterative clustering, forming blocks of adjacent pixels with similar texture, color and brightness.

The present invention uses both clustering and classification algorithms: the former extracts a rough foreground from superpixels that carry no semantic information, and the latter refines the complex and diverse foreground to obtain complete individual repeated elements or even several different kinds of repeated elements. As shown in the left image of FIG. 3, the user roughly paints the region where the repeated elements are located with a brush, and the present invention extracts the foreground information containing the repeated elements from the color appearance and positional distribution of these repeated elements (as shown in the right image of FIG. 3).

The main idea of the present invention is to quickly extract the repeated elements in an image through simple interaction: given a color image with repeated elements, the user roughly selects these regions and obtains precise repeated elements from them, which largely meets the user's expectations.

To quickly segment repeated elements through simple interaction, the image is first pre-processed to accelerate the algorithm: superpixels are used instead of individual pixels, forming visually meaningful irregular pixel blocks composed of adjacent pixels with similar texture, color, brightness and other features. To match different images, the size of the generated superpixels is adjusted for different brush inputs, which maximizes the advantages of superpixels.

Superpixels are a form of image over-segmentation. First, the spatial position feature and the color feature of each pixel are combined into a five-dimensional feature vector Fk{lk, ak, bk, xk, yk}, where the first three attributes l, a, b correspond to the pixel's values in LAB (color model) space, x and y denote the coordinate position of the pixel in the image, and k denotes each pixel. N seed points are first selected at random, N being determined by the size or number of the superpixels, and the pixels closest to each seed point are searched within a specified range; the distance here is not the L1 or L2 distance but the similarity distance D between pixels (see the formula below), determined by the pixel features. Similar pixels are grouped into one class, a class is found for every pixel in the image, a new average cluster center is then computed over all the pixels in each class, and the procedure is iterated to obtain new N superpixels until convergence.

Here ds and dc respectively denote the spatial distance between pixels and the color difference in CIELAB space, and m and s coordinate the relationship between these two attributes. The parameter s is calculated from the largest possible value in XY space, and the parameter m represents the largest possible value of the distance in LAB space; the smaller m is, the more irregular the superpixel shapes become and the more closely they fit the image boundaries. The allowable range is [1, 40], and m is set to 10 in the present invention.
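
The distance D itself appears only in the patent figure and is not reproduced in this text. The sketch below is therefore a minimal illustration that assumes the standard SLIC weighting D = sqrt((dc/m)² + (ds/s)²) over the five-dimensional feature Fk{lk, ak, bk, xk, yk}; the exact form used by the invention may differ, and the toy data, array shapes and the helper name slic_distance are assumptions for illustration only.

```python
import numpy as np

def slic_distance(pixel_feat, seed_feat, m=10.0, s=20.0):
    """Similarity distance D between a pixel and a seed in (l, a, b, x, y) space.

    dc is the colour difference in CIELAB, ds the spatial distance in XY;
    m and s trade the two terms off against each other.  The exact weighting
    used by the patent is shown only in its figure, so the standard SLIC form
    D = sqrt((dc / m)**2 + (ds / s)**2) is assumed here.
    """
    l1, a1, b1, x1, y1 = pixel_feat
    l2, a2, b2, x2, y2 = seed_feat
    dc = np.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    ds = np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return np.sqrt((dc / m) ** 2 + (ds / s) ** 2)

# One assignment/update round: assign every pixel to its nearest seed, then
# recompute the seed centres; iterating this until convergence yields the N superpixels.
rng = np.random.default_rng(0)
features = rng.random((1000, 5)) * np.array([100.0, 255.0, 255.0, 64.0, 64.0])  # toy (l,a,b,x,y) data
seeds = features[rng.choice(len(features), 8, replace=False)]
labels = np.array([np.argmin([slic_distance(f, c) for c in seeds]) for f in features])
seeds = np.array([features[labels == k].mean(axis=0) if np.any(labels == k) else seeds[k]
                  for k in range(len(seeds))])
```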

Step S20: after the image has been formed into superpixels, generating a foreground region, wherein during clustering the superpixels are merged according to the color distance, the spatial distance and the distance from each superpixel to the center of the path input by the user, forming connected regions.

Interaction-based image segmentation derives foreground or background information, to a greater or lesser extent, from the region input by the user. On vector graphics, the grouping error of the shape elements covered by the user's scribble is analysed to extract the group of shape elements that best matches the user's expectation. Taking the shape elements partially or fully covered by the scribble as input, the elements are recombined to build a candidate set, and the best recombined subset is selected from all the covered elements. Proximity in visual perception, shape similarity and the interaction between common regions are modelled to define a grouping error between two elements, and a bottom-up approach iteratively combines the elements with the smallest grouping error until all elements are included in the same group.

After the image has been formed into superpixels, these further need to be condensed into meaningful regions, from which the foreground region is generated. During clustering, the color distance and spatial distance between superpixels are considered first, the intention being that similar superpixels that are spatially close are merged preferentially to form connected regions. The similarity measure for superpixel merging is determined jointly by the color difference dc and the spatial distance ds, which together measure the similarity of the two superpixels being merged. Going one step further, observation shows that users tend to draw curved paths close to the elements (hereinafter collectively called the brush), so the shortest distance from a superpixel to the brush is also incorporated into the merging measure.

All the superpixels are first merged bottom-up until they all belong to one class, the merging rule being that the nearest distance is merged first. The distance here covers the color distance, the spatial distance and the distance from the superpixel to the center of the path input by the user. This distance measure can also be called dissimilarity: the larger the distance, the smaller the similarity. In terms of appearance, the most direct cue for deciding whether two superpixels are similar is color, so the color distance is the most fundamental factor in judging similarity. Spatially adjacent superpixels are assumed to be more likely to belong to the same cluster than distant ones, which follows from image coherence. In addition, the brush input by the user must be taken into account: superpixels closer to the center of the brush should be given priority. FIG. 4 illustrates the superpixel clustering process; a sketch of the merging loop is given below.
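
A minimal sketch of this bottom-up merging, written as a greedy loop that always fuses the two clusters with the smallest dissimilarity and keeps every intermediate cluster as a candidate (the tree of FIG. 4). The sketch is generic: clusters can be any objects, dissimilarity is any callable such as the G measure described next, and representing a merged cluster as a tuple is an illustrative choice rather than the patent's data structure.

```python
def agglomerate(clusters, dissimilarity):
    """Bottom-up merging: repeatedly fuse the two clusters with the smallest
    dissimilarity until a single cluster remains, recording every intermediate
    cluster as a candidate (the tree of FIG. 4).

    clusters: list of initial single-superpixel clusters (any objects).
    dissimilarity: callable (c1, c2) -> float; smaller means merge earlier.
    A merged cluster is represented here simply as a tuple of its two children,
    which keeps the whole merge history inspectable.
    """
    history = list(clusters)        # every cluster ever formed, bottom to top
    active = list(clusters)         # clusters still available for merging
    while len(active) > 1:
        best = None
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                d = dissimilarity(active[i], active[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = (active[i], active[j])
        history.append(merged)
        active = [c for k, c in enumerate(active) if k not in (i, j)] + [merged]
    return history
```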

Starting from each superpixel at the bottom layer, the two superpixels with the closest similarity are selected and aggregated into a cluster. In FIG. 4, A-G are clusters containing only a single superpixel, while H-M are clusters aggregated from multiple superpixels, i.e. products of the clustering. The following formula computes the similarity of two clusters:

Gi,j = dc·σ(dP)·σ(ds);

Here Gi,j measures the similarity between the two clusters and represents the distance between them; dc is the Euclidean distance in the RGB color space, normalized to [0, 1]; dP denotes the spatial distance between the two clusters, defined as the distance between their center points; and ds denotes the average distance from the two clusters to the brush. The ds and dc here have the same meaning as before, only at a different stage: earlier the unit was the individual superpixel, so the distances were between superpixels, whereas after a round of merging the units have become clusters composed of superpixels, and what is computed is then the distance between every two clusters.

To give superpixels within the width of the brush a higher weight, the value ds is replaced by ds/wi (wi being the brush width). σ(·) denotes the Sigmoid function, which limits the influence of the clusters' spatial distance on the overall measure; if dP and ds tend to positive infinity, the value mapped by σ(·) tends to 1. Among these three distances, the similarity between two clusters depends fundamentally on the color distance. FIG. 5 shows the relationship between the clusters and the green brush (the curve in FIG. 5): each gray circle (shown in blue in the original figure, e.g. A, B, C, F) represents a cluster, and the black circle inside it (shown in red) is the center of that cluster; the line segments from A to F and from B to C (shown in yellow) therefore represent the spatial distance between two clusters, while the gray segments (from A to the brush and from C to the brush) are each cluster's nearest distance to the brush (ds).
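
A minimal sketch of the cluster dissimilarity G just described, assuming RGB colors normalized to [0, 1] and clusters represented as plain dictionaries (an illustrative layout, not the patent's data structure); the scaling of ds by the brush width wi follows the replacement ds/wi described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cluster_dissimilarity(c1, c2, brush_pts, brush_width=20.0):
    """G = dc * sigma(dP) * sigma(ds / w); smaller values are merged first.

    c1, c2: dicts with 'color' (mean RGB in [0, 1]), 'center' (x, y) and
    'pixels' (Nx2 coordinates).  brush_pts: Mx2 points sampled along the
    user's stroke.  The dictionary layout is illustrative only.
    """
    dc = np.linalg.norm(c1["color"] - c2["color"]) / np.sqrt(3.0)   # colour distance, kept in [0, 1]
    dP = np.linalg.norm(c1["center"] - c2["center"])                # centre-to-centre spatial distance

    def nearest_to_brush(pts):
        d = np.linalg.norm(pts[:, None, :] - brush_pts[None, :, :], axis=2)
        return d.min(axis=1).mean()                                  # average nearest distance to the brush

    ds = 0.5 * (nearest_to_brush(c1["pixels"]) + nearest_to_brush(c2["pixels"])) / brush_width
    return dc * sigmoid(dP) * sigmoid(ds)

# Illustrative usage with two toy clusters and a horizontal brush stroke
c1 = {"color": np.array([0.80, 0.20, 0.20]), "center": np.array([10.0, 10.0]),
      "pixels": np.array([[9.0, 9.0], [11.0, 10.0]])}
c2 = {"color": np.array([0.75, 0.25, 0.20]), "center": np.array([14.0, 12.0]),
      "pixels": np.array([[13.0, 12.0], [15.0, 11.0]])}
brush = np.stack([np.linspace(5.0, 20.0, 50), np.full(50, 10.0)], axis=1)
print(cluster_dissimilarity(c1, c2, brush))
```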

The degree to which each cluster matches the brush region is analysed from the perspective of spatial distribution and distance.

R = β(μp→s + μs→p) + (1-β)(σp→s + σs→p);

The first term μ (comprising μp→s and μs→p) measures how well the brush and each cluster match in position, the second term (comprising σp→s and σs→p) expresses how well each cluster and the brush match in distribution, and the parameter β is introduced to balance these two abilities. μp→s denotes the average nearest distance from the sampled points of the candidate to the brush, μs→p denotes the average nearest distance from the brush to the sampled points, σp→s denotes the standard deviation of the nearest distances from the sampled points to the brush, and σs→p denotes the standard deviation of the nearest distances from the brush to the sampled points. In practice, setting β to 0.5 gives good results.
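
A direct sketch of R = β(μp→s + μs→p) + (1-β)(σp→s + σs→p) over points sampled from a candidate cluster and points sampled along the brush; the point counts and the uniform sampling itself are assumptions for illustration.

```python
import numpy as np

def brush_match_error(samples, brush_pts, beta=0.5):
    """Matching error R between one candidate cluster and the brush.

    samples: Nx2 points sampled from the cluster's superpixels.
    brush_pts: Mx2 points sampled along the user's stroke.
    Lower R means the cluster matches the stroke better, both in position
    (mean nearest distances) and in distribution (their standard deviations).
    """
    d = np.linalg.norm(samples[:, None, :] - brush_pts[None, :, :], axis=2)  # all pairwise distances
    d_p_to_s = d.min(axis=1)   # each sampled point -> nearest brush point
    d_s_to_p = d.min(axis=0)   # each brush point   -> nearest sampled point
    mu = d_p_to_s.mean() + d_s_to_p.mean()
    sigma = d_p_to_s.std() + d_s_to_p.std()
    return beta * mu + (1.0 - beta) * sigma

# Illustrative usage: a cluster sampled over the image versus a horizontal stroke
samples = np.random.rand(200, 2) * 100.0
brush = np.stack([np.linspace(0.0, 100.0, 80), np.full(80, 50.0)], axis=1)
print(brush_match_error(samples, brush))   # lower is a better match
```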

For each cluster in FIG. 4, the present invention computes a matching error, and the error score reflects how well these candidate clusters match the brush; however, most of these candidates are not elements with complete contours, so the score alone cannot serve as the criterion for which candidate cluster the user selects.

Step S30: diffusing the clustering results top-down onto each superpixel and assigning different weight information to each clustering result, so as to obtain the matching error of each superpixel, that is, its foreground probability.

Since the relationships between these pixel blocks are unknown, each pixel block will eventually belong to either the foreground or the background. On the basis of superpixels, it is particularly important to obtain more information about the foreground from simple interaction and to distinguish it from the background as much as possible; the foreground referred to above is a set of repeated objects containing multiple similar pieces of information. What is needed is a target with foreground characteristics, and the target is a set of superpixels. These sets share commonalities in color and texture; superpixels with similar colors should be grouped together to form repeated elements with semantic information. As shown in the right image of FIG. 6, the probability of belonging to the foreground is predicted in units of superpixels.

It is assumed that every superpixel has a certain ability to express the foreground; the stronger this ability, the higher the probability of being foreground. Accordingly, a superpixel-based evaluation system for measuring foreground expressiveness is proposed, and the foreground containing the repeated elements is predicted from the evaluation result. Based on the brush region input by the user, the possible foreground targets are analysed. Merging only according to the relationships between superpixels, without considering the stroke information input by the user, is not sufficient to obtain the repeated elements that best match the user's expectation. The strokes input by the user are considered to carry a specific intention, so the clustering results (clusters) are analysed in combination with the user's input to obtain the required preliminary foreground targets. The present invention computes a matching error between each cluster and the brush, and diffuses this clustering result top-down onto each superpixel to obtain the matching error of every original superpixel.

Further analysis of these clustering results shows that the closer the clustering process is to its end, the higher the reference value of the result, so different weight information is assigned to each clustering result. If only two superpixels have been combined, their reference weight should be adjusted to a lower value; the total number of pixels included in each result is used as the weight value. On the basis of the superpixel segmentation, a fixed number of points is uniformly sampled from each superpixel as representatives, and a newly formed cluster consists of all the sampled points it contains. Finally, all superpixel scores are normalized to [0, 255]; the closer a score is to 255, the closer it is to the foreground.

For each superpixel in the image, whether it matches the user's input determines its foreground expressiveness, that is, its probability of belonging to the foreground. The foreground expressiveness of the superpixels in each cluster is computed; in the initial stage there is little difference, and two situations may then arise:

a. during the merging process, some superpixels are gradually merged with the background region and their matching error gradually increases;

b. during the merging process, some superpixels are gradually merged with the foreground region and their matching error shows a decreasing trend;

The foreground reference value Ri of the i-th cluster is obtained by the preceding formula, where Ri is the foreground reference value of the i-th cluster and wik is the weight of the i-th cluster that contains the k-th superpixel, the weight here taking the value of the total number of pixels in the i-th cluster. The value of n is not fixed for each superpixel. Finally, to unify the results, all superpixel expressiveness scores are normalized to [0, 255], with scores close to 255 close to the foreground and 0 close to the background.
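
The exact formula combining Ri and wik appears only in the patent figure and is not reproduced in this text. The sketch below therefore assumes a pixel-count-weighted average of inverted matching errors followed by the normalization to [0, 255] described above; the aggregation form and all names are illustrative assumptions.

```python
import numpy as np

def superpixel_foreground_scores(cluster_errors, cluster_members, cluster_sizes, n_superpixels):
    """Diffuse per-cluster matching errors down to superpixels and normalize to [0, 255].

    cluster_errors[i]  : matching error R of cluster i against the brush (lower = better).
    cluster_members[i] : indices of the superpixels contained in cluster i.
    cluster_sizes[i]   : total pixel count of cluster i, used as its weight w_ik.
    The aggregation below (a weight-averaged, inverted error) is an assumed stand-in
    for the formula shown only in the patent figure.
    """
    score = np.zeros(n_superpixels)
    weight = np.zeros(n_superpixels)
    for err, members, size in zip(cluster_errors, cluster_members, cluster_sizes):
        for k in members:
            score[k] += size * (1.0 / (1.0 + err))   # lower error contributes a larger value
            weight[k] += size
    score = score / np.maximum(weight, 1e-9)
    score = 255.0 * (score - score.min()) / max(score.max() - score.min(), 1e-9)
    return score   # near 255: foreground, near 0: background

# Illustrative usage: three candidate clusters over five superpixels
print(superpixel_foreground_scores(
    cluster_errors=[0.2, 1.5, 0.4],
    cluster_members=[[0, 1, 2], [3, 4], [0, 1, 2, 3, 4]],
    cluster_sizes=[300, 200, 500],
    n_superpixels=5))
```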

Step S40: taking the foreground superpixel color information as data input and the color similarity between superpixels as the similarity matrix, and screening out, through continual iteration, the data among all samples that can serve as representative samples.

The algorithm of the present invention takes the position of the user's brush into account as a factor affecting the final repeated elements. When only one group of repeated elements lies within the range swept by the brush, repeated elements distributed at different positions can already be extracted well; but when there are two or more groups of repeated elements, as shown in FIG. 7, the algorithm is not yet robust enough: from the computer's point of view, all of the screened-out repeated elements are what the user expects. These repeated elements must be classified again in order to obtain the corresponding repeated elements, instead of leaving the computer in an ambiguous state.

At this point most of the foreground region has been obtained, but the foreground still needs to be partitioned more finely; in general this is a partition-based clustering problem. The classic k-means clustering algorithm comes to mind first, but it requires the number of clusters to be set in advance, which is unknown here: before the user interacts, no one knows how many kinds of repeated elements there will be in the end. In addition, k-means easily falls into local optima on large data sets and is better suited to simple feature classification than to multi-dimensional features. The newer affinity propagation clustering algorithm is more robust than k-means and also more accurate.
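To illustrate the practical difference mentioned here, the short sketch below contrasts scikit-learn's KMeans, which must be told the number of clusters, with AffinityPropagation, which infers it from the data; the random feature blobs are placeholder data, not superpixel features from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans, AffinityPropagation

rng = np.random.default_rng(0)
# Placeholder "superpixel color features": three loose blobs in a Lab-like space.
X = np.vstack([rng.normal(c, 2.0, size=(50, 3))
               for c in ((0, 0, 0), (20, 5, 5), (40, -5, 10))])

km = KMeans(n_clusters=3, n_init=10).fit(X)   # the cluster count must be given
ap = AffinityPropagation(damping=0.5).fit(X)  # the cluster count is inferred
print("k-means clusters:", len(set(km.labels_)))
print("affinity propagation clusters:", len(ap.cluster_centers_indices_))
```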

Based on a message-passing mechanism, affinity propagation finds, among all the data points, the exemplar that best represents each point, which can be regarded as the sample most similar to itself. The match between samples is measured from two aspects, and representative samples are selected at the same time.

First, the responsibility information (the r matrix), shown on the left of Figure 8, expresses whether a data point is suitable to serve as the exemplar of other data points, i.e., whether it can represent them. Second, the availability information (the a matrix), shown on the right of Figure 8, expresses how appropriate it is for each sample to choose every other sample as its cluster-center exemplar.

As shown in Figure 8, each iteration computes the responsibility r(i, k) and availability a(i, k) between every pair of samples; the update formulas are as follows:

r(i, k) ← s(i, k) − max_{k′≠k}[a(i, k′) + s(i, k′)];

a(i, k) ← min[0, r(k, k) + ∑_{i′∉{i, k}} max[0, r(i′, k)]];

where i denotes a sample, i′ denotes other samples, k denotes a candidate exemplar center, k′ denotes other candidate exemplar centers, s(i, k) and s(i, k′) are entries of the similarity matrix for the different candidate exemplar centers, r(k, k) reflects how unsuitable point k is to be assigned to another cluster center, and r(i′, k) denotes the responsibilities other than r(i, k) and r(k, k), representing the attraction of point k to the other points.

After each round of computation, in order to avoid numerical oscillation of the availability and responsibility values, a damping coefficient is added: each new result is linearly combined with the previous one and used as the new message value. In this way, after repeated iterations, the two kinds of messages are accumulated from the first computation to the last, and the finally converged exemplars are not determined by any single round. The time complexity of the algorithm is O(K²T), where K is the number of samples and T is the maximum number of iterations. Since the algorithm does not require the number of clusters to be set in advance but instead screens out, by continuous iteration, the data points among all samples that can serve as representative samples, in the algorithm of the present invention the foreground superpixel color information is used as the data input, the color similarity between superpixels is used as the similarity matrix, and T is the maximum number of iterations.
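A minimal sketch of these damped message-passing updates is shown below, assuming the similarity matrix S has already been built from superpixel color distances; the damping factor of 0.5, the exemplar criterion, and the function name affinity_propagation are illustrative assumptions rather than the patent's exact settings. In practice an existing implementation such as sklearn.cluster.AffinityPropagation could also be used.

```python
import numpy as np

def affinity_propagation(S, max_iter=200, damping=0.5):
    """Plain affinity propagation on a precomputed similarity matrix S (K x K)."""
    K = S.shape[0]
    R = np.zeros((K, K))  # responsibility messages r(i, k)
    A = np.zeros((K, K))  # availability messages a(i, k)
    for _ in range(max_iter):
        # Responsibility: r(i,k) <- s(i,k) - max_{k'!=k}[a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(K), idx]
        AS[np.arange(K), idx] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(K), idx] = S[np.arange(K), idx] - second_max
        R = damping * R + (1 - damping) * R_new  # damped update

        # Availability: a(i,k) <- min[0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))]
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())       # keep r(k,k) itself unclipped
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)              # a(k,k) is not clipped at 0
        A = damping * A + (1 - damping) * A_new  # damped update

    # Exemplars are points whose accumulated self-messages are positive.
    exemplars = np.where((A + R).diagonal() > 0)[0]
    if len(exemplars) == 0:
        return exemplars, None
    # Simplified assignment: each sample goes to its most similar exemplar.
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    return exemplars, labels
```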

Further, a user study is used to compare the method of the present invention with the well-known iterative graph-cut foreground extraction algorithm and the lazy snapping segmentation algorithm, in order to evaluate how friendly the interaction of the present method is.

First, the evaluation metrics for the user study need to be selected; accuracy and elapsed time are used. Accuracy is the ratio between the intersection of the given ground-truth extraction and the result produced by the repeated element extraction method on an image, and that result; it measures how precise the extraction is. Elapsed time is the interval from the user's first operation to the last; because the running time of the three evaluated algorithms accounts for only a very small fraction of the total elapsed time, the elapsed time can be regarded as the sum of the user's operation time and thinking time, and it measures how friendly the interaction of each algorithm is.

Then, it is hypothesized that the method of the present invention is comparable in accuracy to the other two methods while consuming less time. To verify this hypothesis, the following user study is designed: first, Photoshop is used to manually and accurately segment the repeated elements of a given image as the ground-truth extraction; users are then asked to operate, and once the accuracy of an algorithm's result exceeds a given threshold the trial is stopped and the elapsed time is recorded.

Next, experimental images need to be selected. According to the brief experimental scheme above, images that take a long time should be chosen, i.e., images with many and complex repeated elements, because if the repeated elements in an image are too obvious and simple, the elapsed time will be very short and the differences between the algorithms will not show.

Finally, a detailed user study protocol is designed based on the above: four images are selected as the experimental tasks, and the corresponding binary maps in the second row are used as the ground-truth extractions, as shown in Figure 9. The three algorithms are integrated into one program, and each user performs three groups of operations, using each of the three algorithms to extract repeated elements from the four sub-images in Figure 9; during the experiment the order of the algorithms is random. The method of the present invention involves two operations, adjusting the brush size and drawing a stroke on the image with the brush; the iterative graph-cut algorithm involves three operations, clicking with the left mouse button and dragging out a green rectangle, drawing with Shift + left mouse button, and drawing with Ctrl + left mouse button; lazy snapping has two interactive operations, drawing with the left mouse button and drawing with the right mouse button. A total of 20 participants were recruited for the user study, each completing 3 groups of operations × 4 images, giving 240 trials in total. Throughout the experiment, the following information was recorded for quantitative analysis: the repeated element extraction time for each image, the number of user interactions (i.e., the stroke count), and the segmentation accuracy.

Task 1: the image to be segmented and the manually annotated ground-truth map of repeated elements are first shown, and the user extracts the repeated elements using each of the three interaction methods. The time each user spends until a fixed accuracy is reached is recorded. Figure 10 shows the recorded times for extracting the designated repeated elements in the four images of Figure 9; the first group of data in Figure 10 is the average elapsed time of the three methods over all trials. For this task, half of the volunteers were selected to perform the designated operations; extreme-case data were ignored, representative data were recorded, and the comparison table in Figure 10 was drawn.

The interaction times of the three methods differ considerably in some cases; overall, the present invention has the shortest average elapsed time. In particular, for images a and b in Figure 9, repeated-measures analysis of variance (ANOVA) shows that the time the present invention takes to complete the extraction (F=97.735, p<3.74E-08) is significantly better than that of the other methods, while for the last two groups of images the difference narrows slightly (p<0.019).
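As a rough illustration of this kind of significance test, the sketch below runs a one-way ANOVA on per-participant elapsed times with scipy.stats.f_oneway; this is a simplification of the repeated-measures ANOVA reported above, and the arrays of timings are placeholder values, not the study's measurements.

```python
from scipy import stats

# Hypothetical elapsed times (seconds) for one test image, one value per participant.
ours      = [12.1, 10.8, 13.5, 11.9, 12.7]
graph_cut = [35.4, 41.2, 38.9, 44.1, 39.7]
lazy_snap = [21.3, 19.8, 24.5, 22.1, 20.6]

# One-way ANOVA across the three interaction methods.
F, p = stats.f_oneway(ours, graph_cut, lazy_snap)
print(f"F = {F:.3f}, p = {p:.3g}")
```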

It is not hard to see from these test images that the algorithm of the present invention has a greater advantage against more complex backgrounds. From the above results, the iterative graph-cut algorithm takes the longest, far behind the method of the present invention and lazy snapping. Marking repeated elements with that algorithm's original rectangular box is not suitable: although at this stage the foreground and background pixels given by the user are modeled and the repeated elements share the same pixel properties, the user-given foreground and background alone still cannot segment the repeated elements in one pass. This also shows, from another angle, that repeated elements cannot be extracted from a few specific foreground pixel features alone, even though that algorithm can obtain good results in very simple cases. Compared with lazy snapping, both the maximum and the median of the present method are smaller, indicating that its average elapsed time is less than that of lazy snapping. From the statistics and error bars, the present method is far better than the iterative graph-cut algorithm; compared with lazy snapping, its median is smaller and its data fluctuate within a small range, showing that the present method not only spends less time on average than lazy snapping but also performs more stably.

Task 2: within a limited time, and given the prompted image and the annotated ground-truth map of repeated elements, the user quickly segments the repeated elements in the image; the segmentation is then compared with the ground truth and the segmentation accuracy is computed. Figure 11 shows the accuracy obtained when volunteers segmented the four groups of images.

Examining the ANOVA results for the average accuracy of the three algorithms shows that the algorithm of the present invention outperforms the iterative graph-cut algorithm in the above experiments (F=22.04, p<0.0002) and is indistinguishable from lazy snapping. In particular, for image b in Figure 9, the blue pills or white tablets (the pills are shown in blue and the tablets in white) can be extracted quickly, taking less time than the other methods (F=35.86, p<8.66E-06) with higher accuracy (F=574.6962, p<4.77E-11).

The present invention does not directly count the proportion of correctly segmented pixels in the ground-truth map, because the segmentation result may extend beyond, or even completely cover, the ground truth. Therefore the final segmentation is first compared with the ground-truth map to obtain the area of the overlap between the two, and the proportion of this overlap area within the segmentation result is taken as the accuracy. Because of its complex interaction, the iterative graph-cut algorithm has no advantage under a time limit, with an average accuracy below 60%; the more complex the image, the lower its accuracy. Lazy snapping, on the whole, maintains a high accuracy consistent with the method of the present invention. In the first group of experiments, the significance of the present method is slightly lower than in the other groups (p<0.00056); although repeated elements are interspersed throughout that image, a careful user can also achieve good results with the other methods by drawing on the foreground pixel by pixel. In the other groups of experiments, significance testing shows that the algorithm of the present invention is clearly superior, with higher accuracy (p<0.0002). Overall, the differences come down to two reasons: first, the first group is easy to segment, while the images in the second, third, and fourth groups are hard to segment; second, the experiment is time-limited, and the algorithm of the present invention needs only one continuous stroke to complete the segmentation, so even if the user repeats the extraction many times when time is plentiful, the accuracy stays at roughly the same level, whereas lazy snapping is a process of gradual refinement and more time does improve its accuracy.
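A minimal sketch of this accuracy measure is given below, assuming the segmentation result and the ground truth are binary masks of the same size; the function name extraction_accuracy is an illustrative assumption.

```python
import numpy as np

def extraction_accuracy(result_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Accuracy as the overlap area divided by the area of the segmentation result.

    Both masks are arrays of the same shape (nonzero = repeated element). A result
    that spills far beyond the ground truth is penalized, because the overlap is
    divided by the (larger) result area rather than the ground-truth area.
    """
    result = result_mask.astype(bool)
    truth = truth_mask.astype(bool)
    result_area = result.sum()
    if result_area == 0:
        return 0.0
    overlap = np.logical_and(result, truth).sum()
    return float(overlap) / float(result_area)
```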

While the two tasks above were being completed, the number of strokes the user entered at each step was also counted, with the results shown in Figure 12. As can be seen from Figure 12, the algorithm of the present invention demands the least user input, and combined with the accuracy results it outperforms the other two methods. An F-test on the number of operations shows a significant difference: the volunteers in the study performed fewer operations on average with the present algorithm (F=27.6477, p<3.249E-06). First, the interactions of both the iterative graph-cut algorithm and the present method are simpler than those of lazy snapping, with fewer input strokes, yet the iterative graph-cut algorithm cannot reach comparable accuracy under the same conditions. Second, in terms of accuracy, both lazy snapping and the present algorithm can segment the repeated elements fairly accurately, but lazy snapping requires more user input, or forces the user to think carefully about how to extract repeated elements from a complex background, wasting time. In summary, comparing the three methods, the repeated element extraction based on simple interaction proposed by the present invention has the advantage.

Further, the present invention can quickly edit these repeated elements, modifying their attributes such as color and brightness to form new objects without editing every repeated object individually, for example the repeated elements in Figure 13 (chairs, dishes, balloons, lemons, and so on). Going a step further, the geometric features of these repeated elements, such as shape contour information, can be obtained and deformed in a user-defined way, so that all repeated elements are edited geometrically.

Compared with other repeated element detection methods, the method for quickly extracting repeated elements in an image proposed by the present invention has the following advantages:

(1) Unlike other methods that operate on single pixels, the present invention uses the faster superpixel as the smallest unit of overall detection and classifies each superpixel as foreground or background.

(2) In the prior art, detecting repeated elements often requires manually drawing foreground and background regions stroke by stroke, which makes the input overly tedious; in the present invention, the user only needs to roughly select the repeated element region once to obtain the corresponding repeated element foreground.

(3) The present invention can handle the case where multiple groups of repeated elements lie in the brushed region; they are still classified by the similarity principle of repeated elements, only one group of repeated elements is segmented at a time, and if multiple groups exist they are offered as candidate results for the user to choose from.

The interaction-based repeated element extraction technique proposed by the present invention can segment repeated elements without the user specifying a template and without knowing any information about these objects in advance. Through a brush-like mouse operation, the group of repeated elements the user wants is extracted. The work of the present invention extracts repeated elements from an image interactively, without requiring the user to specify a repeated element template, and has the following advantages:

(1) An interactive operation method of local selection and global expansion is proposed. With a single brush-like mouse action, instead of detecting or matching repeated elements over the entire scene, a group of similar elements can be determined; it involves less interaction and less computation and is more efficient.

(2) A repeated element segmentation method is proposed. The repeated elements segmented by this method fully consider and exploit appearance similarity; the extraction result is very close to the subjective selection of the human eye, and the algorithm is more intelligent.

(3) A real-time selection feedback mechanism is proposed. It is divided into two steps, selection and feedback: while the user is selecting, the algorithm provides real-time feedback, reducing the user's waiting time and making the system friendlier to use.

(4) A cross-region, cross-semantic repeated object segmentation method is proposed. It can still segment a specific group of repeated elements in complex cases containing multiple groups of repeated objects, whereas other methods do not consider multiple groups of repeated elements coexisting in the same image.

Further, as shown in Figure 14, based on the above image repeated element segmentation method, the present invention correspondingly provides a smart device including a processor 10, a memory 20, and a display 30. Figure 14 shows only some components of the smart device, but it should be understood that not all of the illustrated components are required and that more or fewer components may be implemented instead.

In some embodiments the memory 20 may be an internal storage unit of the smart device, for example a hard disk or internal memory of the smart device. In other embodiments the memory 20 may also be an external storage device of the smart device, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the smart device. Further, the memory 20 may include both an internal storage unit of the smart device and an external storage device. The memory 20 is used to store the application software installed on the smart device and various kinds of data, such as the program code installed on the smart device, and may also be used to temporarily store data that has been output or is to be output. In one embodiment, an image repeated element segmentation program 40 is stored in the memory 20; this program 40 can be executed by the processor 10, thereby implementing the image repeated element segmentation method of the present application.

In some embodiments the processor 10 may be a central processing unit (CPU), a microprocessor, or another data processing chip, used to run the program code stored in the memory 20 or to process data, for example to execute the image repeated element segmentation method.

In some embodiments the display 30 may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 30 is used to display information on the smart device and to present a visual user interface. The components 10-30 of the smart device communicate with one another over a system bus.

In one embodiment, when the processor 10 executes the image repeated element segmentation program 40 in the memory 20, the following steps are implemented:

reading in an image and preprocessing and grouping the image by linear iterative clustering, forming, between adjacent pixels, pixel blocks with similar texture, color, and brightness;

after the image is formed into superpixels, generating the foreground region: during clustering, superpixels are merged according to a distance covering the color distance, the spatial distance, and the distance from the superpixel to the center of the path input by the user, forming connected regions;

diffusing the clustering results from top to bottom onto each superpixel and assigning different weight information to each clustering result, to obtain the matching error, i.e., the foreground probability, of each superpixel;

taking the foreground superpixel color information as the data input and the color similarity between superpixels as the similarity matrix, and screening out, by continuous iteration, the data that can serve as representative samples among all samples.

The present invention also provides a storage medium, wherein the storage medium stores an image repeated element segmentation program which, when executed by a processor, implements the steps of the image repeated element segmentation method described above.

In summary, the present invention provides an image repeated element segmentation method, a smart device, and a storage medium. The method includes: reading in an image and preprocessing and grouping it by linear iterative clustering to form, between adjacent pixels, pixel blocks with similar texture, color, and brightness; after the image is formed into superpixels, generating the foreground region, where during clustering superpixels are merged according to a distance covering the color distance, the spatial distance, and the distance from the superpixel to the center of the path input by the user, forming connected regions; diffusing the clustering results from top to bottom onto each superpixel and assigning different weight information to each clustering result, to obtain the matching error, i.e., the foreground probability, of each superpixel; and taking the foreground superpixel color information as the data input and the color similarity between superpixels as the similarity matrix, screening out, by continuous iteration, the data that can serve as representative samples among all samples. The present invention quickly segments the repeated elements in an image through simple interaction: given a color image containing repeated elements, the user simply and roughly drags a brush, and the repeated elements the user expects are quickly segmented out.

Of course, those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware (such as a processor or controller) through a computer program, and the program can be stored in a computer-readable storage medium; when executed, the program may include the processes of the foregoing method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, or the like.

It should be understood that the application of the present invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations based on the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. An image repeated element segmentation method, characterized in that the image repeated element segmentation method comprises:
reading in an image and preprocessing and grouping the image by linear iterative clustering, to form, between adjacent pixels, pixel blocks with similar texture, color and brightness;
after the image is formed into superpixels, generating a foreground region, wherein during clustering superpixels are merged according to a distance covering the color distance of the superpixels, the spatial distance and the distance from the superpixels to the center of the path input by the user, to form connected regions;
diffusing the clustering results from top to bottom onto each superpixel and assigning different weight information to each clustering result, to obtain the matching error, i.e. the foreground probability, of each superpixel;
taking the foreground superpixel color information as the data input and the color similarity between superpixels as the similarity matrix, and screening out, by continuous iteration, the data that can serve as representative samples among all samples.
2. The image repeated element segmentation method according to claim 1, characterized in that the step of reading in an image, preprocessing and grouping the image by linear iterative clustering, and forming, between adjacent pixels, pixel blocks with similar texture, color and brightness comprises:
combining the spatial position feature and the color feature of each pixel to form a five-dimensional feature vector F_k = {l_k, a_k, b_k, x_k, y_k};
wherein the l, a and b attributes respectively correspond to the values of the pixel in the LAB space, x and y represent the coordinate position of the pixel in the image, and k denotes each pixel;
randomly selecting N seed points, N being determined by the size or number of the superpixels, and searching within a specified range for the pixels closest in distance to the seed points, the distance being the similarity distance D between pixels:
wherein d_s and d_c respectively represent the spatial distance between pixels and the color distance difference in the CIELAB space, and m and s coordinate the relationship between these two attributes; the parameter s is calculated from the maximum possible value of the distance in the XY space, and the parameter m represents the maximum possible value of the distance in the LAB space;
the distance being determined by the features of the pixels;
classifying similar pixels into one class, finding the corresponding class for each pixel in the image, then taking the average of all the pixels in each class as the new cluster center, and iterating again until convergence to obtain N new superpixels.
3. The image repeated element segmentation method according to claim 2, characterized in that the value range of m is [1, 40].
4. The image repeated element segmentation method according to claim 2, characterized in that the step of, after the image is formed into superpixels, generating the foreground region, wherein during clustering superpixels are merged according to the color distance of the superpixels, the spatial distance and the distance from the superpixels to the center of the path input by the user to form connected regions, comprises:
merging all superpixels bottom-up until all superpixels are classified into one class, the merging following the rule that the closest distance is merged first;
the closest distance referring to the distance covering the color distance of the superpixels, the spatial distance and the distance from the superpixels to the center of the path input by the user;
starting from each superpixel at the bottom, selecting the two superpixels closest in similarity to aggregate into a cluster, and computing the similarity of two clusters:
G_{i,j} = d_c · σ(d_P) · σ(d_s);
wherein G_{i,j} is used to measure the similarity between two clusters and represents the distance between the two clusters, d_c is the Euclidean distance in RGB color space, d_P denotes the spatial distance of the two clusters, defined as the distance between the center points of the two clusters, and d_s represents the average distance of the two clusters to the brush.
5. The image repeated element segmentation method according to claim 4, characterized in that, to give superpixels within the width of the brush a higher weight, the value of d_s is replaced with d_s/w_i, wherein w_i is the brush width and σ(·) is the Sigmoid function, used to limit the overall influence of the spatial distance of a cluster;
if d_P and d_s tend to positive infinity, the value mapped by σ(·) tends to 1.
6. The image repeated element segmentation method according to claim 5, characterized in that the matching degree of each cluster with the brush region is analyzed from the perspectives of spatial distribution and distance:
R = β(μ_{p→s} + μ_{s→p}) + (1 − β)(σ_{p→s} + σ_{s→p});
wherein μ_{p→s} and μ_{s→p} measure the positional matching degree of the brush with each cluster, σ_{p→s} and σ_{s→p} measure the matching capability of each cluster with the brush in distribution, and the parameter β is introduced to balance the two; μ_{p→s} denotes the average minimum distance from the sampled points to the candidate brush, μ_{s→p} denotes the average minimum distance from the candidate brush to the sampled points, σ_{p→s} denotes the standard deviation of the minimum distances from the sampled points to the brush, σ_{s→p} denotes the standard deviation of the minimum distances from the brush to the sampled points, and the value of β is 0.5.
7. The image repeated element segmentation method according to claim 6, characterized in that the step of diffusing the clustering results from top to bottom onto each superpixel, assigning different weight information to each clustering result, and obtaining the matching error, i.e. the foreground probability, of each superpixel comprises:
on the basis of the superpixel segmentation, uniformly sampling a fixed number of points from each superpixel as representatives, a newly formed cluster being composed of all the sampled points it contains;
normalizing all superpixel scores to [0, 255], a score close to 255 being closer to the foreground;
computing the foreground expressiveness of the superpixels in each cluster:
during the merging process, some superpixels are gradually merged with background regions and their matching error gradually increases;
during the merging process, some superpixels are gradually merged with foreground regions and their matching error shows a decreasing trend;
wherein R_i is the foreground reference value of the i-th cluster, w_ik refers to the weight of the i-th cluster containing the k-th superpixel, the weight taking the value of the total number of pixels of the i-th cluster, and the value of n is not fixed for each superpixel; all superpixel expressiveness scores are normalized to [0, 255], with scores close to 255 close to the foreground and 0 close to the background.
8. The image repeated element segmentation method according to claim 7, characterized in that the step of taking the foreground superpixel color information as the data input, taking the color similarity between superpixels as the similarity matrix, and screening out, by continuous iteration, the data that can serve as representative samples among all samples comprises:
finding, from all the data, the sample point best able to represent itself, measuring the matching degree between samples by the responsibility information and the availability information, and at the same time selecting representative samples;
iteratively computing, at each iteration, the responsibility information r(i, k) and the availability information a(i, k) between samples, with the calculation formulas as follows:
wherein i denotes a sample, i′ denotes other samples, k denotes a candidate exemplar center, k′ denotes other candidate exemplar centers, s(i, k) and s(i, k′) denote the similarity matrix of the different candidate exemplar centers, r(k, k) reflects how unsuitable point k is to be assigned to another cluster center, and r(i′, k) denotes the values other than r(i, k) and r(k, k), representing the attraction of point k to other points;
after each round of computation, adding a damping coefficient and linearly combining each result with the previous result to replace it as the new message value.
9. A smart device, characterized in that the smart device comprises: a memory, a processor, and an image repeated element segmentation program stored in the memory and runnable on the processor, the image repeated element segmentation program, when executed by the processor, implementing the steps of the image repeated element segmentation method according to any one of claims 1-8.
10. A storage medium, characterized in that the storage medium stores an image repeated element segmentation program, the image repeated element segmentation program, when executed by a processor, implementing the steps of the image repeated element segmentation method according to any one of claims 1-8.
CN201910313823.2A 2019-04-18 2019-04-18 Image repetitive element segmentation method, intelligent device and storage medium Active CN110163869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910313823.2A CN110163869B (en) 2019-04-18 2019-04-18 Image repetitive element segmentation method, intelligent device and storage medium


Publications (2)

Publication Number Publication Date
CN110163869A true CN110163869A (en) 2019-08-23
CN110163869B CN110163869B (en) 2023-01-03

Family

ID=67639603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910313823.2A Active CN110163869B (en) 2019-04-18 2019-04-18 Image repetitive element segmentation method, intelligent device and storage medium

Country Status (1)

Country Link
CN (1) CN110163869B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120275703A1 (en) * 2011-04-27 2012-11-01 Xutao Lv Superpixel segmentation methods and systems
CN105118049A (en) * 2015-07-22 2015-12-02 东南大学 Image segmentation method based on super pixel clustering
US20170243345A1 (en) * 2016-02-19 2017-08-24 International Business Machines Corporation Structure-preserving composite model for skin lesion segmentation
CN106981068A (en) * 2017-04-05 2017-07-25 重庆理工大学 A kind of interactive image segmentation method of joint pixel pait and super-pixel

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何豪杰: "具有重复场景元素的复杂自然图像颜色编辑", 《中国优秀硕士学位论文全文数据库信息科技辑》 *
於敏等: "基于相似性和统计性的超像素的图像分割", 《计算机工程与应用》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675431A (en) * 2019-10-08 2020-01-10 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional multi-target tracking method fusing image and laser point cloud
CN110853077A (en) * 2019-10-17 2020-02-28 广西电网有限责任公司电力科学研究院 Self-adaptive infrared dynamic frame feature extraction method based on morphological change estimation
CN116090163A (en) * 2022-11-14 2023-05-09 深圳大学 A method of color selection for mosaic tiles and related equipment
CN116090163B (en) * 2022-11-14 2023-09-22 深圳大学 Mosaic tile color selection method and related equipment
CN117892231A (en) * 2024-03-18 2024-04-16 天津戎军航空科技发展有限公司 Intelligent management method for production data of carbon fiber magazine
CN117892231B (en) * 2024-03-18 2024-05-28 天津戎军航空科技发展有限公司 Intelligent management method for production data of carbon fiber magazine

Also Published As

Publication number Publication date
CN110163869B (en) 2023-01-03

Similar Documents

Publication Publication Date Title
CN107808143B (en) Computer Vision-Based Dynamic Gesture Recognition Method
Cheng et al. Global contrast based salient region detection
Wang et al. Discriminative learning with latent variables for cluttered indoor scene understanding
CN107430771B (en) System and method for image segmentation
CN110163869B (en) Image repetitive element segmentation method, intelligent device and storage medium
Lalitha et al. A survey on image segmentation through clustering algorithm
Galleguillos et al. Context based object categorization: A critical survey
Wang et al. Probabilistic motion diffusion of labeling priors for coherent video segmentation
CN104200240B (en) A kind of Sketch Searching method based on content-adaptive Hash coding
US20110216976A1 (en) Updating Image Segmentation Following User Input
CN108510000A (en) The detection and recognition methods of pedestrian&#39;s fine granularity attribute under complex scene
CN102436636B (en) Method and system for segmenting hair automatically
TW201331772A (en) Image index generation method and apparatus
JP2006513468A (en) How to segment pixels in an image
CN101493887B (en) Eyebrow image segmentation method based on semi-supervised learning and hash index
CN110188763B (en) Image significance detection method based on improved graph model
CN104969240B (en) Method and system for image procossing
CN108629783A (en) Image partition method, system and medium based on the search of characteristics of image density peaks
CN107305691A (en) Foreground segmentation method and device based on images match
Wang et al. Adaptive nonlocal random walks for image superpixel segmentation
CN107730506A (en) Image partition method and image search method
Martin et al. A learning approach for adaptive image segmentation
Wang et al. Adaptive and fast image superpixel segmentation approach
CN117611620A (en) Image segmentation method based on super pixel region combination
Yang et al. View suggestion for interactive segmentation of indoor scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant