WO2018036462A1 - Method for image segmentation, computer device and storage medium - Google Patents
Method for image segmentation, computer device and storage medium
- Publication number
- WO2018036462A1 (PCT/CN2017/098417)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- target area
- clustering
- brightness value
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present application relates to the field of image processing, and in particular, to a method for image segmentation, a computer device, and a storage medium.
- the segmentation method based on color features places high demands on color depth; if the color feature of the segmentation target is not distinct, segmentation fails. For example, when the color of the lips is very light and almost the same as the facial skin color, this segmentation method cannot successfully segment the lips.
- the energy-functional approach first establishes a parametric expression describing the features of the region set; the shape of the region is changed by adjusting the parameters, and when the defined energy functional reaches its minimum, the geometry represented by the expression fits the actual region edge.
- a method, computer device, and storage medium for image segmentation are provided.
- a method of image segmentation comprising:
- the original target image is segmented according to the result of the clustering.
- a computer device comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
- the original target image is segmented according to the result of the clustering.
- One or more non-transitory computer readable storage media storing computer readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
- the original target image is segmented according to the result of the clustering.
- FIG. 1 is a schematic diagram showing the internal structure of a terminal in an embodiment
- FIG. 2 is a schematic diagram showing the internal structure of a server in an embodiment
- FIG. 3 is a flow chart of a method for image segmentation in an embodiment
- FIG. 4A is a schematic diagram of points for extracting geometric features of a face in an embodiment
- FIG. 4B is a schematic diagram of points for extracting geometric features of a face in another embodiment
- FIG. 5 is a flow chart of a method for clustering pixel points according to extracted geometric features and color features in one embodiment
- FIG. 6A is a schematic illustration of an original target image containing a lip region in one embodiment
- FIG. 6B is a schematic diagram of an original target image including a lip region in another embodiment
- FIG. 6C is a schematic view of the lip region after filling and blurring in one embodiment
- FIG. 7 is a flow chart of a method for clustering pixel points according to color features and brightness values in one embodiment
- FIG. 8 is a flow chart of a method for blurring an image in an embodiment
- Figure 9 is a block diagram showing the structure of a computer device in an embodiment
- FIG. 10 is a structural block diagram of a first clustering module in an embodiment
- FIG. 11 is a structural block diagram of a second clustering module in an embodiment
- FIG. 12 is a structural block diagram of a fuzzy processing module in an embodiment.
- the internal structure of the terminal 100 is as shown in FIG. 1, including a processor, an internal memory, a non-volatile storage medium, a network interface, an image acquisition device, a display screen, and an input device connected through a system bus.
- the non-volatile storage medium of the terminal 100 stores an operating system and computer readable instructions that are executed by the processor to implement a method of image segmentation.
- the processor is used to provide computing and control capabilities to support the operation of the entire terminal.
- the internal memory in the terminal stores computer readable instructions that, when executed by the processor, cause the processor to perform a method of image segmentation.
- the network interface is used to connect to the network for communication.
- the image acquisition device is used for image acquisition, such as image entry.
- the display screen of the terminal may be a liquid crystal display or an electronic ink display screen.
- the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the outer casing of the electronic device, or an external keyboard, touchpad, or mouse.
- the terminal can be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like.
- FIG. 1 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the terminal to which the solution of the present application is applied.
- the specific terminal may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- a server 200 is presented.
- the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected by a system bus.
- the non-volatile storage medium of the server can store an operating system and computer readable instructions that, when executed, can cause the processor to perform a method of image segmentation.
- the server's processor is used to provide computing and control capabilities that support the operation of the entire server.
- the internal memory can store computer readable instructions that, when executed by the processor, cause the processor to perform a method of image segmentation.
- the server's network interface is used for network communication, such as sending assigned tasks.
- a method for image segmentation is proposed, which can be applied to both a terminal and a server, and specifically includes the following steps:
- Step 302 Acquire an original target image to be segmented.
- the original target image may be a color image or a grayscale image.
- the premise of segmenting the original target image is that the region to be segmented in the image has a certain geometric limitation.
- the original target image may be pre-stored in the memory, and when the image processing is started, the processor first extracts the original target image to be segmented from the memory.
- the original target image to be segmented may also be obtained by real-time shooting by a camera.
- Step 304 Extract geometric features of the region to be segmented in the original target image.
- the geometric features of the region to be segmented are extracted according to the geometry of the region to be segmented.
- for a face image, face alignment technology can be used to extract the points describing the geometric features of the face, and the extracted feature points are then connected to form the corresponding geometric features.
- FIG. 4A is a schematic diagram of points for extracting geometric features of a face in an embodiment, including the points (65-82) that describe the geometric features of the lip region; connecting these feature points forms the geometric feature of the lip region.
- 4B is a schematic diagram of points for extracting geometric features of a face in another embodiment.
- for non-face images, any computer-vision algorithm can be used to extract the geometric features of the region to be segmented; the algorithm for extracting geometric features is not limited here.
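As a sketch of this step, assuming the lip landmarks (e.g. points 65-82 in FIG. 4A) have already been located by some face-alignment library, connecting them in order gives a closed polygon that can be rasterized into a region mask with a simple even-odd (ray-casting) test. The coordinates below are made-up illustrative values, not the patent's data:

```python
import numpy as np

def polygon_mask(height, width, poly):
    """Rasterize a closed polygon (list of (x, y) vertices) into a
    boolean mask using the even-odd ray-casting rule."""
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edges never toggle the crossing count
        crosses = ((y0 > ys) != (y1 > ys)) & \
                  (xs < (x1 - x0) * (ys - y0) / (y1 - y0) + x0)
        inside ^= crosses
    return inside

# Hypothetical lip-contour landmarks (x, y), connected in order:
lip_points = [(3, 5), (6, 3), (9, 5), (6, 8)]
mask = polygon_mask(12, 12, lip_points)  # True inside the lip polygon
```

The resulting boolean mask is exactly the "target region vs. non-target region" division that the later filling step operates on.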
- Step 306 Acquire color features of each pixel in the original target image.
- an image is composed of individual pixels, and each pixel corresponds to one color feature.
- the original target image may be a color image or a grayscale image. If it is a color image, components of several color spaces may be combined; for example, the three components of the Lab color space plus the Cb and Cr components of the YCbCr color space form a color feature vector with five components (L, a, b, Cb, Cr), where L in Lab represents lightness, a represents the range from magenta to green, and b the range from yellow to blue; in YCbCr, Cb represents the blue chroma offset and Cr the red chroma offset. If the image is a grayscale image, the grayscale information alone can serve as the color feature. Acquiring the color feature of each pixel in the original target image means obtaining the color feature vector representing that pixel.
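The five-component feature described above can be assembled per pixel from standard color-space conversions. The following sketch assumes 8-bit sRGB input; the conversion constants are the usual sRGB/D65 and BT.601 ones, which the patent does not specify, and are used here only for illustration:

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB (0-255) -> CIELAB, assuming a D65 white point."""
    c = rgb / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = c @ M.T / np.array([0.95047, 1.0, 1.08883])  # normalize by white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def rgb_to_cbcr(rgb):
    """sRGB (0-255) -> BT.601 full-range Cb, Cr components."""
    r, g, bl = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * bl
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * bl
    return np.stack([cb, cr], axis=-1)

def color_features(rgb_image):
    """Per-pixel (L, a, b, Cb, Cr) feature vectors, shape (H, W, 5)."""
    return np.concatenate([rgb_to_lab(rgb_image),
                           rgb_to_cbcr(rgb_image)], axis=-1)

img = np.array([[[255, 255, 255], [255, 0, 0]]], dtype=float)  # white, red
feats = color_features(img)
```

Each pixel of `feats` is the five-component vector that the clustering step later extends with a brightness value.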
- Step 308 Cluster each pixel point in the original target image according to the extracted geometric feature and the color feature of each pixel.
- the target region and the non-target region to be segmented in the original target image are determined according to the extracted geometric features. Since the target and non-target regions obtained from the geometric features are not necessarily accurate and sometimes deviate considerably, the division must be further corrected using the color feature of each pixel to obtain a more accurate division.
- the target area and the non-target area are filled with different colors to distinguish them, and the filled image is blurred. Parameters used to represent different color features are extracted from the blurred image; for example, the brightness value of each pixel of the blurred image is extracted, and each pixel in the original target image is then clustered according to its color feature and the corresponding brightness value.
- the extraction is not limited to brightness values; other parameters that reflect color features, such as chroma or saturation, may also be used.
- Step 310 Segment the original target image according to the result of the clustering.
- the result obtained by clustering is the probability that each pixel belongs to the target area. After this probability is obtained, it can be binarized: pixels with a probability greater than a preset probability value are divided into the target region, and pixels with a probability less than or equal to the preset probability value are divided into the non-target region.
- the original target image to be segmented is acquired, the geometric features of the region to be segmented are extracted from it, the color feature of each pixel is acquired, and the pixels of the original target image are clustered according to the extracted geometric features and the color features of the pixels. Because geometric and color features are combined, the region can be segmented accurately even when its color features are not distinct.
- the step 308 of clustering individual pixels in the original target image based on the extracted geometric features and the color features of each pixel includes:
- Step 502 Determine a target area and a non-target area to be divided in the original target image according to the extracted geometric features.
- the original target image is divided into two parts according to the extracted geometric features of the region to be segmented, one is a target region, and one is a non-target region.
- the target area may be one or multiple.
- the facial features (eyebrows, eyes, nose, lips, and facial contour) in the face can simultaneously serve as target areas according to their extracted geometric features, with the other parts as the non-target area.
- it is also possible to use only a certain area as the target area, for example, to extract only the geometric features of the lips, take the lips as the target area, and treat everything else as the non-target area.
- step 504 the target area and the non-target area are filled with colors having different luminance values to distinguish the target area from the non-target area.
- the target area and the non-target area to be divided in the original target image are determined, the target area and the non-target area are filled with colors having different luminance values to distinguish the target area from the non-target area.
- the difference in the brightness values of the fill colors used is as large as possible.
- the target area may be filled with white (luminance value 255) and the non-target area with black (luminance value 0), or the target area may be filled with black and the non-target area with white.
- Step 506 Perform a blurring process on the filled image to obtain a brightness value corresponding to each pixel in the image subjected to the blurring process.
- the target area obtained from the extracted geometric features of the region to be segmented is not necessarily accurate and sometimes deviates considerably, so the filled image must be blurred; the subsequent adjustment of the divided target and non-target areas by the color features can then yield a more accurate division.
- the blur radius needs to be determined.
- the size of the blur radius depends on the deviation of the geometric features of the region to be segmented: if the deviation is large, the blur radius should be increased; if the deviation is small, the blur radius should be reduced.
- the size of the blur radius can be preset according to an empirical value. After the blur radius is determined, the filled image can be blurred with a Gaussian blur algorithm.
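A minimal sketch of the fill-and-blur step in pure NumPy; the separable Gaussian kernel and the sigma = radius/2 choice are illustrative assumptions, since the patent only says the radius is preset from an empirical deviation range:

```python
import numpy as np

def gaussian_blur(img, radius, sigma=None):
    """Separable Gaussian blur with edge padding (no external libraries)."""
    sigma = sigma if sigma is not None else max(radius / 2.0, 1.0)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    k /= k.sum()  # normalize so brightness is preserved
    conv = lambda row: np.convolve(np.pad(row, radius, mode='edge'),
                                   k, mode='valid')
    tmp = np.apply_along_axis(conv, 1, img.astype(float))  # blur rows
    return np.apply_along_axis(conv, 0, tmp)               # then columns

# Fill the target region white (255) and the rest black (0), then blur:
mask = np.zeros((32, 32), dtype=bool)
mask[10:22, 10:22] = True                  # hypothetical target region
filled = np.where(mask, 255.0, 0.0)
blurred = gaussian_blur(filled, radius=4)  # per-pixel brightness values
```

After blurring, pixels deep inside the target stay near 255, pixels far outside stay near 0, and pixels near the uncertain boundary take intermediate brightness values, which is what lets the later clustering soften the hard geometric division.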
- FIG. 6A is an original target image including a lip region in one embodiment
- FIG. 6B is a schematic view of an original target image including a lip region in another embodiment
- FIG. 6C is a schematic diagram of the extracted lip region after filling and blurring, in which the extracted lip region is filled with white and the other, non-target regions are filled with black.
- the brightness value corresponding to each pixel point is calculated, and the calculation of the brightness value can be performed by using an existing calculation method.
- the target area and the non-target area may be filled with black and white, so that the calculation of the luminance value can be simplified to the calculation of the gray value.
- Step 508 Cluster each pixel point in the original target image according to the color feature of each pixel and the corresponding brightness value.
- each pixel has unique coordinates in the image, and the color feature and brightness value of the pixels at the same coordinate position before and after the image processing are combined into a multi-dimensional feature vector representing the pixel. For example, if each pixel in the original target image corresponds to a 5-dimensional color feature vector (L, a, b, Cb, Cr), then combining it with the brightness value B corresponding to that pixel after the blurring yields a 6-dimensional feature vector (L, a, b, Cb, Cr, B). The pixels in the original target image are clustered according to the multi-dimensional feature vector obtained for each pixel.
- first, an initial membership degree is determined for each pixel according to the brightness value B corresponding to that pixel in the processed image; second, the initial center values of the target and non-target areas are calculated according to the initial membership degree and the multi-dimensional feature vector corresponding to each pixel; finally, the pixels in the original target image are clustered according to the initial membership degrees, the initial center values, and the multi-dimensional feature vector of each pixel, to obtain the probability that each pixel belongs to the target area, and the image is segmented based on that probability.
- the region is first divided by extracting the geometric features of the region to be segmented, and the target region divided by the geometric features is then corrected according to the color features. Thus, even if the color feature of the target region is not distinct, the geometric constraint still allows the target area to be divided accurately; combining the geometric and color features of the region to be segmented segments the region more precisely.
- the step 508 of clustering each pixel in the original target image according to the color feature of each pixel and the corresponding luminance value includes:
- Step 508A Determine, according to the brightness value corresponding to each pixel point, an initial membership degree of each pixel point belonging to the target area and the non-target area.
- membership degree is a concept from fuzzy set theory. Specifically, if every element x in the universe of discourse (the scope of the study) U is associated with a number A(x) ∈ [0, 1], then A is called a fuzzy set on U, and A(x) is the membership degree of x in A.
- A(x), viewed as a function, is called the membership function of A. The closer the membership degree A(x) is to 1, the higher the degree to which x belongs to A; the closer A(x) is to 0, the lower that degree. Over the value interval [0, 1], the membership function A(x) thus indicates the degree to which x belongs to A.
- the initial membership degree is the initial value used in calculating the probability that each pixel ultimately belongs to the target area.
- the initial membership is determined based on the brightness value.
- the initial membership degree in the target area of a pixel whose brightness is greater than the preset brightness value is set to 1, and that of a pixel whose brightness is not greater than the preset brightness value is set to 0; accordingly, the initial membership degree in the non-target area is 0 for a pixel brighter than the preset value and 1 for a pixel not brighter than it.
- specifically, with reference to FIG. 6B, the initial membership degree of x_i in the target region is computed as follows: if the luminance value at position x_i in FIG. 6B is greater than 128, the initial membership degree is 1; if it is less than or equal to 128, the initial membership degree is 0.
- the initial membership degree of x_i in the non-target area is exactly the opposite: if the luminance value is greater than 128, the initial membership degree is 0, and if it is 128 or less, the initial membership degree is 1.
- Step 508B Determine initial center values of the target area and the non-target area according to the color features of each pixel and the corresponding brightness values, respectively.
- the feature data x_i is the multi-dimensional feature vector corresponding to each pixel (composed of the color feature and the brightness value) in this embodiment; u_ij is the initial membership degree of pixel i in category j.
- the cluster center of the target area, that is, its initial center value c_1, is calculated as the mean of the multi-dimensional feature vectors, taken in the original target image (FIG. 6A), of all pixels whose luminance value in FIG. 6B equals 255.
- likewise, the mean c_2 of the multi-dimensional feature vectors in the original target image is calculated over all pixels of the non-target area whose luminance value in FIG. 6B equals 0.
- c_1 and c_2 are the cluster centers, that is, the initial center values of the target area and the non-target area, respectively.
- Step 508C Clustering the pixel points in the original target image according to the initial membership degree and the initial center value and the color features of each pixel and the corresponding brightness value to obtain a probability that each pixel point belongs to the target area.
- the initial membership degrees and the initial center values are taken as initial parameters, the color feature of each pixel and the corresponding luminance value are combined into a multi-dimensional feature vector as the input variable, and a clustering algorithm is then used for the clustering calculation.
- the FCM (fuzzy C-means) clustering algorithm may be used for the iterative calculation to obtain the final membership degree of each pixel in the target region, and the region is segmented according to this final membership degree (probability). Specifically, update iterations are performed with the formulas u_ij = 1 / Σ_{k=1}^{C} (||x_i − c_j|| / ||x_i − c_k||)^{2/(m−1)} and c_j = Σ_i u_ij^m x_i / Σ_i u_ij^m so as to minimize the objective function J_m = Σ_i Σ_j u_ij^m ||x_i − c_j||².
- ||·|| is a norm, used to measure the similarity between any data point and a cluster center; k ranges over 1 to C, and C is the number of categories.
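The iteration described above can be sketched as a generic fuzzy C-means implementation with the usual fuzzifier m = 2; the toy 1-D feature rows and the brightness-thresholded initial memberships below stand in for the patent's 6-dimensional (L, a, b, Cb, Cr, B) vectors and are illustrative only:

```python
import numpy as np

def fcm(X, U, m=2.0, iters=100, tol=1e-5):
    """Fuzzy C-means. X: (N, D) feature vectors; U: (N, C) initial
    memberships. Returns final memberships and cluster centers."""
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # c_j update
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        dist = np.maximum(dist, 1e-12)                  # avoid divide-by-zero
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :])
                       ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# Toy 1-D "feature vectors": two groups near 0 and near 10.
X = np.array([[0.0], [0.5], [1.0], [9.0], [9.5], [10.0]])
B = np.array([200, 200, 100, 100, 50, 50])  # hypothetical brightness values
u_target = (B > 128).astype(float)          # initial membership in target
U0 = np.stack([u_target, 1.0 - u_target], axis=1)
U, centers = fcm(X, U0)
prob_target = U[:, 0]  # per-pixel probability of being in the target region
```

Note how the deliberately imperfect brightness-based initialization is corrected by the iteration: membership in each cluster converges to follow the feature vectors, exactly the "geometric division refined by color features" mechanism the text describes.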
- the filled image is subjected to blurring, and the step 506 of calculating the luminance value of each pixel in the blurred image includes:
- Step 506A Acquire a preset blur radius, and perform blur processing on the filled image according to the blur radius.
- the filled image needs to be blurred because the target and non-target areas divided by the geometric features are not necessarily accurate and may deviate; after blurring, the divided target and non-target areas are corrected and adjusted to obtain a more precise division.
- the size of the blur radius can be preset. Its size depends on the deviation, which is an empirical value: the deviation range of the algorithm for extracting geometric features can be evaluated through repeated tests, and the blur radius is then determined from that range, generally taking a value greater than the maximum deviation. For example, if the deviation range is 0-10, the blur radius can be a value greater than 10, so that the target region can be divided more accurately later.
- Step 506B Calculate a brightness value corresponding to each pixel in the image subjected to the blurring process.
- the target area and the non-target area are respectively filled with colors having different luminance values, and the filled image is then blurred, yielding a new target image; the brightness value corresponding to each pixel in the new target image is calculated.
- the target area is generally filled with white and the non-target area with black, which not only distinguishes the two areas well but also simplifies the brightness calculation: with black-and-white filling, only a grayscale calculation is needed to obtain the brightness value of each pixel, whereas with color filling a weighted average of the three color components must be computed to obtain it.
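As a small illustration of the simplification described here (the BT.601 weights are the standard choice for the weighted average, assumed rather than specified by the patent):

```python
def luminance(rgb):
    """Weighted-average luminance of an RGB pixel (BT.601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# With black-and-white filling, all three channels are equal, so the
# weighted average reduces to the gray value itself:
white = luminance((255, 255, 255))  # ~255.0
black = luminance((0, 0, 0))        # 0.0
```

For an achromatic (gray) pixel the weights sum to 1 and cancel out, which is why the black-and-white fill lets the brightness step degenerate into a plain grayscale read-off.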
- the region is segmented according to the probability, obtained by the clustering, that each pixel belongs to the target region: pixels with a probability greater than the preset probability value are divided into the target region, and pixels with a probability less than or equal to the preset probability value are divided into the non-target region.
- the probability that each pixel belongs to the target area is obtained after the end of the clustering iteration.
- the obtained probability is binarized, that is, whether each pixel belongs to the target area is decided from its probability.
- a probability value may be set, for example, the probability value is set to 0.6, the pixel point with the probability greater than 0.6 is divided into the target region, and the pixel point with the probability value less than or equal to 0.6 is divided into the non-target region.
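The final thresholding step is a one-liner; the 0.6 cut-off below mirrors the example value in the text, and the probability grid is made up for illustration:

```python
import numpy as np

# prob: hypothetical per-pixel probabilities of belonging to the target region
prob = np.array([[0.95, 0.61, 0.60],
                 [0.30, 0.75, 0.10]])
target_mask = prob > 0.6  # True -> target region, False -> non-target region
```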
- the embodiment of the present invention provides a computer device.
- the internal structure of the computer device may correspond to the structure shown in FIG. 1 or FIG. 2, and each of the following modules may be implemented in whole or in part by software, hardware, or a combination thereof.
- the computer device 900 in this embodiment includes:
- the image obtaining module 902 is configured to acquire an original target image to be segmented.
- the extraction module 904 is configured to extract geometric features of the region to be segmented in the original target image.
- the color feature acquisition module 906 is configured to acquire color features of each pixel in the original target image.
- the first clustering module 908 is configured to cluster each pixel in the original target image according to the extracted geometric feature and the color feature of each pixel.
- the segmentation module 910 is configured to segment the original target image according to the result of the clustering.
- the first clustering module 908 includes:
- the determining module 1002 is configured to determine a target area and a non-target area to be divided in the original target image according to the extracted geometric features.
- the filling module 1004 is configured to fill the target area and the non-target area with colors having different brightness values to distinguish the target area from the non-target area.
- the blur processing module 1006 is configured to perform blur processing on the filled image, and calculate a brightness value corresponding to each pixel in the image after the blur processing.
- the second clustering module 1008 is configured to cluster each pixel point in the original target image according to the color feature of each pixel and the corresponding brightness value.
- the second clustering module 1008 includes:
- the initial membership degree determining module 1008A is configured to determine, according to the brightness value corresponding to each pixel point, an initial membership degree of each pixel point belonging to the target area and the non-target area, respectively.
- the initial center value determining module 1008B is configured to determine initial center values of the target area and the non-target area respectively according to the color features of each pixel point and the corresponding brightness values.
- the third clustering module 1008C is configured to cluster pixel points in the original target image according to the initial membership degree and the initial center value and the color features of each pixel and the corresponding brightness value to obtain each pixel point belonging to the target area. The probability.
- the blurring processing module 1006 includes:
- the blur radius obtaining module 1006A is configured to obtain a preset blur radius and to blur the filled image according to the blur radius.
- the brightness value calculation module 1006B is configured to calculate a brightness value corresponding to each pixel point in the image subjected to the blurring process.
- the segmentation module is further configured to perform segmentation according to the probability, obtained by the clustering, that each pixel belongs to the target region: pixels with a probability greater than the preset probability value are divided into the target region, and pixels with a probability less than or equal to the preset probability value are divided into the non-target region.
- all or part of the flows of the above method embodiments may be implemented by a computer program instructing the related hardware; the computer program may be stored in a computer readable storage medium and, when executed, may include the flows of the embodiments of the methods described above.
- the storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
A method of image segmentation, comprising: acquiring an original target image to be segmented; extracting geometric features of a region to be segmented in the original target image; acquiring a color feature of each pixel in the original target image; clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and segmenting the original target image according to the result of the clustering.
Description
This application claims priority to Chinese Patent Application No. 2016107020518, entitled "Method and apparatus for image segmentation", filed with the Chinese Patent Office on August 22, 2016, the entire contents of which are incorporated herein by reference.
The present application relates to the field of image processing, and in particular to a method for image segmentation, a computer device, and a storage medium.
With the development of image technology, segmenting images has become increasingly important. There are two traditional approaches to segmenting image regions: one based on color features, the other based on edge features and an energy functional. The color-feature approach places high demands on color depth: if the color feature of the segmentation target is not distinct, segmentation fails. For example, when the lips are very light in color and almost the same as the facial skin, this method cannot successfully segment the lip region. The energy-functional approach, in essence, first establishes a parametric expression describing the features of the region set and changes the shape of the region by adjusting the parameters; when the defined energy functional reaches its minimum, the geometry represented by the expression fits the actual region edge exactly. Its drawbacks are that building the parametric expression is difficult when the region's geometry is complex, and the large number of parameters makes minimizing the energy function very hard and the algorithm very inefficient; moreover, the curve it describes is generally continuous and smooth, whereas the boundaries of some segmentation targets are not smooth, so the segmentation result is poor.
SUMMARY
According to various embodiments of the present application, a method for image segmentation, a computer device, and a storage medium are provided.
A method of image segmentation, comprising:
acquiring an original target image to be segmented;
extracting geometric features of a region to be segmented in the original target image;
acquiring a color feature of each pixel in the original target image;
clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and
segmenting the original target image according to the result of the clustering.
A computer device, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps:
acquiring an original target image to be segmented;
extracting geometric features of a region to be segmented in the original target image;
acquiring a color feature of each pixel in the original target image;
clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and
segmenting the original target image according to the result of the clustering.
One or more non-transitory computer readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring an original target image to be segmented;
extracting geometric features of a region to be segmented in the original target image;
acquiring a color feature of each pixel in the original target image;
clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and
segmenting the original target image according to the result of the clustering.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the specification, the drawings, and the claims.
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the internal structure of a terminal in an embodiment;
FIG. 2 is a schematic diagram of the internal structure of a server in an embodiment;
FIG. 3 is a flowchart of a method for image segmentation in an embodiment;
FIG. 4A is a schematic diagram of points for extracting geometric features of a face in an embodiment;
FIG. 4B is a schematic diagram of points for extracting geometric features of a face in another embodiment;
FIG. 5 is a flowchart of a method for clustering pixels according to extracted geometric features and color features in an embodiment;
FIG. 6A is a schematic diagram of an original target image containing a lip region in an embodiment;
FIG. 6B is a schematic diagram of an original target image containing a lip region in another embodiment;
FIG. 6C is a schematic diagram of the lip region after filling and blurring in an embodiment;
FIG. 7 is a flowchart of a method for clustering pixels according to color features and brightness values in an embodiment;
FIG. 8 is a flowchart of a method for blurring an image in an embodiment;
FIG. 9 is a structural block diagram of a computer device in an embodiment;
FIG. 10 is a structural block diagram of a first clustering module in an embodiment;
FIG. 11 is a structural block diagram of a second clustering module in an embodiment;
FIG. 12 is a structural block diagram of a blur processing module in an embodiment.
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and are not intended to limit it.
As shown in FIG. 1, in one embodiment the internal structure of the terminal 100 includes a processor, an internal memory, a non-volatile storage medium, a network interface, an image acquisition device, a display screen, and an input device connected through a system bus. The non-volatile storage medium of the terminal 100 stores an operating system and computer readable instructions which, when executed by the processor, implement a method of image segmentation. The processor provides computing and control capabilities to support the operation of the entire terminal. The internal memory of the terminal stores computer readable instructions which, when executed by the processor, cause the processor to perform a method of image segmentation. The network interface is used to connect to a network for communication. The image acquisition device is used for image capture, such as image entry. The display screen of the terminal may be a liquid-crystal display or an electronic-ink display; the input device may be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the electronic device, or an external keyboard, touchpad, or mouse. The terminal may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like. A person skilled in the art will understand that the structure shown in FIG. 1 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
As shown in FIG. 2, in one embodiment a server 200 is provided. Referring to FIG. 2, the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the server can store an operating system and computer readable instructions which, when executed, can cause the processor to perform a method of image segmentation. The processor of the server provides computing and control capabilities to support the operation of the entire server. The internal memory can store computer readable instructions which, when executed by the processor, cause the processor to perform a method of image segmentation. The network interface of the server is used for network communication, such as sending assigned tasks. A person skilled in the art will understand that the structure shown in FIG. 2 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
As shown in FIG. 3, in one embodiment a method for image segmentation is proposed. The method can be applied to a terminal as well as to a server, and specifically includes the following steps:
Step 302: acquire an original target image to be segmented.
Specifically, the original target image may be a color image or a grayscale image. The premise for segmenting the original target image is that the region to be segmented in the image has a certain geometric constraint. The original target image may be stored in memory in advance; when image processing starts, the processor first retrieves the original target image to be segmented from memory. In another embodiment, the original target image to be segmented may also be captured in real time by a camera.
Step 304: Extract geometric features of the region to be segmented in the original target image.
In this embodiment, the geometric features of the region to be segmented are extracted according to the geometric shape of the region itself. There are many methods of extracting these geometric features, and different methods may be adopted for different images. For a facial image, face alignment techniques may be used to extract points describing facial geometric features, and connecting the extracted feature points forms the corresponding geometric features. Taking the extraction of lips from a facial picture as an example, FIG. 4A is a schematic diagram of points extracted to describe facial geometric features according to an embodiment, including points (65-82) that describe the geometric features of the lip region; connecting these lip feature points constitutes the geometric features of the lip region. FIG. 4B is a schematic diagram of points extracted to describe facial geometric features according to another embodiment. For non-facial images, any computer vision algorithm may be used to extract the geometric features of the region to be segmented; the extraction algorithm is not limited here.
Step 306: Acquire the color feature of each pixel in the original target image.
Specifically, an image is composed of pixels, and each pixel corresponds to a color feature. The original target image may be a color image or a grayscale image. If the image is a color image, components of multiple color spaces may be combined; for example, the three components of the Lab color space plus the Cb and Cr components of the YCbCr color space may be combined into a color feature vector of five components (L, a, b, Cb, Cr), where L in Lab denotes lightness, a denotes the range from magenta to green, and b denotes the range from yellow to blue; Cb in YCbCr denotes the blue-difference chroma offset, and Cr denotes the red-difference chroma offset. If the image is a grayscale image, the grayscale information alone suffices as the color feature. Acquiring the color feature of each pixel in the original target image means acquiring the color feature vector representing each pixel.
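As a minimal illustration of assembling the (L, a, b, Cb, Cr) vector described above, the sketch below computes the Cb/Cr chroma offsets from an 8-bit RGB pixel with the standard BT.601 full-range conversion; the Lab components would normally come from an image library (e.g. OpenCV), so here they are simply passed in as given values:

```python
# Sketch only: BT.601 full-range RGB -> (Cb, Cr); Lab values assumed precomputed.

def rgb_to_cbcr(r, g, b):
    """Return the (Cb, Cr) chroma offsets for an 8-bit RGB pixel (BT.601)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def color_feature(lab, rgb):
    """Combine Lab components with Cb/Cr into the 5-component feature vector."""
    L, a, b_ = lab
    cb, cr = rgb_to_cbcr(*rgb)
    return (L, a, b_, cb, cr)

# A pure gray pixel has zero chroma offset from the 128 midpoint.
feat = color_feature((50.0, 0.0, 0.0), (100, 100, 100))
```

For a grayscale input the vector would degenerate to the gray level alone, as the paragraph above notes.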
Step 308: Cluster the pixels in the original target image according to the extracted geometric features and the color feature of each pixel.
In this embodiment, the target region to be segmented and the non-target region in the original target image are first determined according to the extracted geometric features. Because the target and non-target regions obtained from the geometric features alone are not necessarily accurate and may sometimes deviate considerably, the division needs to be corrected and refined by further combining the color feature of each pixel. Specifically, after the target and non-target regions are determined according to the geometric features, they are filled with different colors to distinguish them, the filled image is blurred, and parameters characterizing the different colors are extracted from the blurred image; for example, the brightness value of each pixel in the blurred image may be extracted, and the pixels in the original target image are then clustered according to the color feature and corresponding brightness value of each pixel. It should be noted that extraction is not limited to brightness values; other parameters that reflect color characteristics, such as chroma or saturation, may also be used.
Step 310: Segment the original target image according to the result of the clustering.
In this embodiment, the result of the clustering is the probability that each pixel belongs to the target region. After this probability is obtained for each pixel, it may be binarized: pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
In this embodiment, the original target image to be segmented is acquired, the geometric features of the region to be segmented are extracted, the color feature of each pixel is acquired, and the pixels in the original target image are clustered according to the extracted geometric features and the color feature of each pixel. By clustering the pixels with the geometric features and the per-pixel color features combined, the region can be segmented accurately even when its color features are not distinctive, owing to the geometric constraints. Combining the geometric and color features of the region to be segmented therefore yields more accurate segmentation.
As shown in FIG. 5, in one embodiment, step 308 of clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel includes:
Step 502: Determine the target region to be segmented and the non-target region in the original target image according to the extracted geometric features.
In this embodiment, after the geometric features of the region to be segmented are extracted, the original target image is divided into two parts according to them: a target region and a non-target region. There may be one target region or multiple. Taking a facial picture as an example, as shown in FIG. 4A, the facial features (eyebrows, eyes, nose, lips, and facial contour) may all serve simultaneously as target regions according to their extracted geometric features, with the other parts as the non-target region. Alternatively, only one region may serve as the target region; for example, only the geometric features of the lips are extracted, the lips serve as the target region, and everything else is the non-target region.
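One simple way to realize this division, sketched below under the assumption that the connected feature points form a closed contour, is to classify each pixel with a ray-casting point-in-polygon test; the square contour here is a hypothetical stand-in for, e.g., the connected lip points (65-82):

```python
# Sketch only: the closed contour is assumed given by the feature-point step.

def in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the closed polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

contour = [(2, 2), (7, 2), (7, 6), (2, 6)]  # hypothetical closed contour
# True marks the target region, False the non-target region.
mask = [[in_polygon(x, y, contour) for x in range(10)] for y in range(8)]
```

The resulting boolean mask is exactly the target/non-target division that the following fill step colors in.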
Step 504: Fill the target region and the non-target region with colors of different brightness values to distinguish them.
In this embodiment, after the target and non-target regions to be segmented in the original target image are determined, they are filled with colors of different brightness values to distinguish them. To separate the two regions more clearly, the larger the difference between the brightness values of the fill colors, the better. In one embodiment, the target region may be filled with white (brightness value 255) and the non-target region with black (brightness value 0), or the target region with black and the non-target region with white.
Step 506: Blur the filled image and acquire the brightness value corresponding to each pixel in the blurred image.
In this embodiment, because the target region obtained from the extracted geometric features of the region to be segmented is not necessarily accurate and may sometimes deviate considerably, the filled image needs to be blurred, and the divided target and non-target regions are subsequently corrected by combining the color features to obtain a more precise division. Specifically, blurring the image first requires determining a blur radius. The size of the blur radius depends on the magnitude of deviation in extracting the geometric features of the region to be segmented: a large deviation calls for a larger blur radius, and a small deviation for a smaller one. The blur radius may be preset according to empirical values. Once the blur radius is determined, a Gaussian blur algorithm may be applied to the filled image. FIG. 6A shows an original target image containing a lip region according to an embodiment; FIG. 6B is a schematic diagram of an original target image containing a lip region according to another embodiment; FIG. 6C shows the extracted lip region after filling and blurring, where the extracted lip region is filled with white and the other, non-target region with black. After the filled image is blurred, the brightness value corresponding to each pixel is computed. Any existing method of computing brightness values may be used, and no limitation is imposed here. To simplify the brightness computation, in one embodiment the target and non-target regions may be filled with black and white, so that computing brightness values reduces to computing grayscale values.
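The blur step can be sketched as a separable Gaussian filter over the filled 0/255 mask; everything below is illustrative (the radius-to-sigma mapping and the clamped edge handling are assumptions, not prescribed by the text):

```python
import math

def gaussian_kernel(radius, sigma=None):
    """1-D Gaussian weights for the given radius, normalized to sum to 1."""
    sigma = sigma or max(radius / 2.0, 1e-6)  # assumed mapping from radius
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel, radius):
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(row) - 1)  # clamp at edges
            acc += w * row[idx]
        out.append(acc)
    return out

def gaussian_blur(img, radius):
    """Separable 2-D Gaussian blur: one horizontal pass, one vertical pass."""
    kernel = gaussian_kernel(radius)
    rows = [blur_1d(r, kernel, radius) for r in img]
    cols = [blur_1d(list(c), kernel, radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# Filled mask: white (255) target square on a black (0) background.
mask = [[255 if 2 <= x <= 5 and 2 <= y <= 5 else 0 for x in range(8)] for y in range(8)]
soft = gaussian_blur(mask, radius=2)
```

After blurring, interior pixels stay near 255, distant pixels stay near 0, and boundary pixels take intermediate brightness values, which is exactly what the later membership initialization exploits.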
Step 508: Cluster the pixels in the original target image according to the color feature and corresponding brightness value of each pixel.
In this embodiment, each pixel has unique coordinates in the image. The color feature and the brightness value of the pixel at the same coordinates before and after the image processing are merged to obtain a multidimensional feature vector representing that pixel. For example, if each pixel in the original target image corresponds to a five-dimensional color feature vector (L, a, b, Cb, Cr), combining it with the brightness value B (Brightness) of the corresponding pixel after blurring yields a six-dimensional feature vector (L, a, b, Cb, Cr, B). The pixels in the original target image are clustered according to the resulting multidimensional feature vector of each pixel. Specifically, first, the initial membership degree of each pixel is determined according to its brightness value B in the processed image; second, the initial center values of the target and non-target regions are computed according to the initial membership degrees and the multidimensional feature vectors of the pixels; finally, the pixels in the original target image are clustered according to the initial membership degrees, the initial center values, and the multidimensional feature vectors to obtain the probability that each pixel belongs to the target region. The image is then segmented according to the obtained probability that each pixel belongs to the target region.
In this embodiment, the region is first divided by extracting the geometric features of the region to be segmented, and the target region divided by the geometric features is then corrected according to the color features. In this way, even if the color features of the target region are not distinctive, the geometric constraints allow the target region to be delineated accurately; combining the geometric and color features of the region to be segmented yields more accurate segmentation.
As shown in FIG. 7, in one embodiment, step 508 of clustering the pixels in the original target image according to the color feature and corresponding brightness value of each pixel includes:
Step 508A: Determine, according to the brightness value corresponding to each pixel, the initial membership degrees of each pixel with respect to the target region and the non-target region.
In this embodiment, membership degree is a concept from fuzzy evaluation functions. Specifically, if every element x of a universe of discourse U (the scope of study) is assigned a number A(x) ∈ [0, 1], then A is called a fuzzy set on U, and A(x) is called the membership degree of x in A. As x varies over U, A(x) is a function, called the membership function of A. The closer A(x) is to 1, the more strongly x belongs to A; the closer A(x) is to 0, the more weakly x belongs to A; that is, the membership function A(x) valued in the interval [0, 1] expresses the degree to which x belongs to A. Before clustering, the initial membership degree of each pixel with respect to the target region and with respect to the non-target region must first be determined. The initial membership degrees serve as initial values for computing the final probability that each pixel belongs to the target region. The probability (membership degree) of belonging to the target region and that of belonging to the non-target region sum to 1. Suppose the membership degree of a pixel with respect to the target region is A(xi) and with respect to the non-target region is B(xi), where A(xi) + B(xi) = 1 and xi denotes the i-th measured datum. The initial membership degrees are determined from the brightness values. Specifically, in one embodiment, the initial membership degree with respect to the target region is set to 1 for pixels whose brightness value exceeds a preset brightness value and to 0 for pixels whose brightness value does not exceed it; correspondingly, the initial membership degree with respect to the non-target region is 0 for pixels above the preset brightness value and 1 for pixels not above it. Specifically, referring to FIG. 6B, the initial membership degree of xi with respect to the target region is computed as follows: if the brightness value at the position of xi in FIG. 6B is greater than 128, the initial membership degree is 1; if it is less than or equal to 128, the initial membership degree is 0. The initial membership degree of xi with respect to the non-target region is exactly the opposite: 0 where the brightness value is greater than 128, and 1 where it is less than or equal to 128.
Step 508B: Determine the initial center values of the target region and the non-target region according to the color feature and corresponding brightness value of each pixel.
In this embodiment, before clustering, the cluster centers of the target and non-target regions, i.e., the initial center values, must be determined. The initial center values may be computed as weighted averages. Specifically, they may be obtained from the formula c_j = (Σ_{i=1}^{N} u_ij^m · x_i) / (Σ_{i=1}^{N} u_ij^m), where c_j denotes the cluster center of class j; x_i denotes the i-th measured multidimensional datum; u_ij is the membership degree of x_i in class j; m is a parameter controlling the flexibility of the algorithm, typically m = 2; and N denotes the total number of data. Here, the feature datum x_i is the multidimensional feature vector of each pixel (composed of the color feature and the brightness value), and u_ij is the initial membership degree of each pixel in class j. In one embodiment, referring to FIG. 6B, the cluster center of the target region, i.e., its initial center value, is computed as follows: for all pixels whose brightness value equals 255 in FIG. 6B, compute the average of their multidimensional feature vectors in the original target image (FIG. 6A); this average is the initial center value c1. Likewise, for all pixels in the non-target region whose brightness value equals 0 in FIG. 6B, compute the average c2 of their multidimensional feature vectors in the original target image. c1 and c2 are the cluster centers, i.e., initial center values, of the target and non-target regions, respectively.
Step 508C: Cluster the pixels in the original target image according to the initial membership degrees, the initial center values, and the color feature and corresponding brightness value of each pixel to obtain the probability that each pixel belongs to the target region.
In this embodiment, after the initial membership degrees of each pixel with respect to the target and non-target regions and the cluster centers of the target and non-target regions are determined, the initial membership degrees and initial center values serve as initial parameters, the multidimensional feature vector combining each pixel's color feature and corresponding brightness value serves as the input variable, and a clustering algorithm then performs the clustering computation. In one embodiment, the FCM clustering algorithm may be iterated to obtain the final membership degree of each pixel with respect to the target region, and the region is segmented according to this final membership degree (probability). Specifically, the update formulas

c_j = (Σ_{i=1}^{N} u_ij^m · x_i) / (Σ_{i=1}^{N} u_ij^m)    (1)

u_ij = 1 / Σ_{k=1}^{C} (||x_i − c_j|| / ||x_i − c_k||)^{2/(m−1)}    (2)

are iterated so that the objective function J_m = Σ_{i=1}^{N} Σ_{j=1}^{C} u_ij^m ||x_i − c_j||² reaches a minimum, where m is a real number greater than 1, typically m = 2; u_ij is the membership degree of x_i in class j; c_j denotes the cluster center of class j; x_i denotes the i-th measured multidimensional datum; ||·|| is the norm operator, used here to compute the similarity between any measured datum and a cluster center; k ranges over the classes; and C denotes the number of classes. First, the determined initial membership degrees are substituted into formula (1) to compute the cluster centers c_j, which are then substituted into formula (2) to compute a new round of membership degrees; the iteration continues in this manner until max_ij |u_ij^(n+1) − u_ij^(n)| < ε, at which point it stops, where 0 < ε < 1 is the iteration termination parameter and n is the iteration round; during this process J_m converges to a local minimum. The iterative computation yields the final membership degree of each pixel with respect to the target region, and the image is segmented according to this final membership degree (probability).
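Steps 508A-508C can be sketched compactly as below, assuming m = 2, two classes (target/non-target), and Euclidean distance; the toy 1-D "feature vectors" and the brightness-based initialization (one point deliberately mislabeled) are illustrative inputs, not the patent's data:

```python
# Sketch only: two-class fuzzy c-means with brightness-based initialization.

def fcm_two_class(feats, brightness, m=2.0, eps=1e-4, max_iter=100):
    n = len(feats)
    # Step 508A: initial memberships from the blurred-mask brightness (> 128 -> target).
    u = [[1.0, 0.0] if b > 128 else [0.0, 1.0] for b in brightness]
    dist = lambda a, c: sum((ai - ci) ** 2 for ai, ci in zip(a, c)) ** 0.5

    for _ in range(max_iter):
        # Formula (1) / step 508B: centers as membership-weighted averages.
        centers = []
        for j in range(2):
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w) or 1e-12
            centers.append([sum(w[i] * feats[i][d] for i in range(n)) / tot
                            for d in range(len(feats[0]))])
        # Formula (2): memberships from distances to both centers.
        new_u = []
        for i in range(n):
            d = [max(dist(feats[i], centers[j]), 1e-12) for j in range(2)]
            new_u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(2)) for j in range(2)])
        delta = max(abs(new_u[i][j] - u[i][j]) for i in range(n) for j in range(2))
        u = new_u
        if delta < eps:  # termination: memberships have stabilized
            break
    return [row[0] for row in u]  # probability of belonging to the target region

feats = [[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]]
brightness = [200, 200, 40, 40, 40, 40]  # pixel 2 starts in the wrong class
probs = fcm_two_class(feats, brightness)
```

Note how pixel 2, mislabeled by the brightness initialization, is pulled back toward the target cluster by the color-feature distances — the correction behavior the text describes.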
As shown in FIG. 8, in one embodiment, step 506 of blurring the filled image and computing the brightness value of each pixel in the blurred image includes:
Step 506A: Acquire a preset blur radius and blur the filled image according to the blur radius.
In this embodiment, after the target and non-target regions in the original target image are filled, the filled image needs to be blurred, because the target and non-target regions divided by the geometric features are not necessarily accurate and may deviate; the division is subsequently corrected by combining the color features to obtain a more precise result. Before blurring, the blur radius must first be determined. The blur radius may be preset, and the preset size depends on the magnitude of deviation. The deviation magnitude is an empirical value: the deviation range of the geometric feature extraction algorithm may be evaluated through repeated tests, and the blur radius is then determined from this range, generally taking a value greater than the maximum deviation. For example, if the deviation range is 0-10, the blur radius may be a value greater than 10, which facilitates a more precise subsequent division of the target region.
Step 506B: Compute the brightness value corresponding to each pixel in the blurred image.
In this embodiment, to distinguish the target and non-target regions, the two regions are filled with colors of different brightness values, and the filled image is then blurred, yielding a new target image; the brightness value corresponding to each pixel in this new target image is computed. To simplify the brightness computation, the target region is generally filled with white and the non-target region with black, which not only distinguishes the two regions better but also simplifies the computation: with black-and-white filling, the brightness value of each pixel can be obtained by a grayscale computation alone. If colors are used for filling, the brightness value of a pixel must be computed as a weighted average of the three color channel values.
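The contrast drawn above can be illustrated as follows; the Rec. 601 luma weights (0.299, 0.587, 0.114) are one common choice of weighted channel average, assumed here since the text does not fix specific weights:

```python
# Sketch only: the two brightness computations contrasted above.

def brightness_gray(gray):
    """Black/white fill: brightness is just the gray level itself."""
    return float(gray)

def brightness_rgb(r, g, b):
    """Color fill: weighted average of the three channels (Rec. 601 weights assumed)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

white = brightness_rgb(255, 255, 255)
black = brightness_rgb(0, 0, 0)
```

With black-and-white filling, `brightness_gray` suffices, which is why the text recommends it.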
In one embodiment, the region is segmented according to the clustered probability that each pixel belongs to the target region: pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
In this embodiment, when the clustering iteration ends, the membership degree of each pixel with respect to the target region, i.e., the final probability that each pixel belongs to the target region, is obtained. The resulting probabilities are binarized, i.e., whether each pixel belongs to the target region is decided from its probability. Specifically, a probability value may be set, for example 0.6: pixels with a probability greater than 0.6 are assigned to the target region, and pixels with a probability less than or equal to 0.6 are assigned to the non-target region.
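The binarization just described reduces to a single threshold over the per-pixel probabilities (0.6 in the example above):

```python
# Sketch only: threshold the target-region probabilities from the clustering.

def binarize(probs, threshold=0.6):
    """Map each pixel's probability to True (target region) / False (non-target)."""
    return [p > threshold for p in probs]

labels = binarize([0.95, 0.61, 0.60, 0.2])
```

Note the strict inequality: a pixel exactly at the threshold falls into the non-target region, matching "less than or equal to the preset probability value" above.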
An embodiment of the present invention provides a computer device whose internal structure may correspond to the structure shown in FIG. 1 or FIG. 2; each of the following modules may be implemented in whole or in part by software, hardware, or a combination thereof.
In one embodiment, as shown in FIG. 9, a computer device 900 according to this embodiment includes:
an image acquisition module 902, configured to acquire an original target image to be segmented;
an extraction module 904, configured to extract geometric features of a region to be segmented in the original target image;
a color feature acquisition module 906, configured to acquire a color feature of each pixel in the original target image;
a first clustering module 908, configured to cluster the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and
a segmentation module 910, configured to segment the original target image according to the result of the clustering.
As shown in FIG. 10, in one embodiment, the first clustering module 908 includes:
a determination module 1002, configured to determine the target region to be segmented and the non-target region in the original target image according to the extracted geometric features;
a filling module 1004, configured to fill the target region and the non-target region with colors of different brightness values to distinguish them;
a blur processing module 1006, configured to blur the filled image and compute the brightness value corresponding to each pixel in the blurred image; and
a second clustering module 1008, configured to cluster the pixels in the original target image according to the color feature and corresponding brightness value of each pixel.
As shown in FIG. 11, in one embodiment, the second clustering module 1008 includes:
an initial membership determination module 1008A, configured to determine, according to the brightness value corresponding to each pixel, the initial membership degrees of each pixel with respect to the target region and the non-target region;
an initial center value determination module 1008B, configured to determine the initial center values of the target region and the non-target region according to the color feature and corresponding brightness value of each pixel; and
a third clustering module 1008C, configured to cluster the pixels in the original target image according to the initial membership degrees, the initial center values, and the color feature and corresponding brightness value of each pixel to obtain the probability that each pixel belongs to the target region.
As shown in FIG. 12, in one embodiment, the blur processing module 1006 includes:
a blur radius acquisition module 1006A, configured to acquire a preset blur radius and blur the filled image according to the blur radius; and
a brightness value computation module 1006B, configured to compute the brightness value corresponding to each pixel in the blurred image.
In one embodiment, the segmentation module is further configured to segment the region according to the clustered probability that each pixel belongs to the target region, wherein pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
A person of ordinary skill in the art may understand that all or some of the procedures in the methods of the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed, the program may include the procedures of the embodiments of the foregoing methods. The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM).
The foregoing embodiments describe only several implementations of the present application; their descriptions are specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be noted that a person of ordinary skill in the art may further make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
- 1. A method of image segmentation, comprising: acquiring, by a processor, an original target image to be segmented that is stored in memory; extracting geometric features of a region to be segmented in the original target image; acquiring a color feature of each pixel in the original target image; clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and segmenting the original target image according to the result of the clustering.
- 2. The method according to claim 1, wherein the clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel comprises: determining a target region to be segmented and a non-target region in the original target image according to the extracted geometric features; filling the target region and the non-target region with colors of different brightness values to distinguish the target region and the non-target region; blurring the filled image and computing a brightness value corresponding to each pixel in the blurred image; and clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel.
- 3. The method according to claim 2, wherein the clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel comprises: determining, according to the brightness value corresponding to each pixel, initial membership degrees of each pixel with respect to the target region and the non-target region; determining initial center values of the target region and the non-target region according to the color feature and the corresponding brightness value of each pixel; and clustering the pixels in the original target image according to the initial membership degrees, the initial center values, and the color feature and the corresponding brightness value of each pixel to obtain a probability that each pixel belongs to the target region.
- 4. The method according to claim 2, wherein the blurring the filled image and acquiring the brightness value of each pixel in the blurred image comprises: acquiring a preset blur radius and blurring the filled image according to the blur radius; and computing the brightness value corresponding to each pixel in the blurred image.
- 5. The method according to claim 1, wherein the segmenting the original target image according to the result of the clustering comprises: segmenting the region according to the clustered probability that each pixel belongs to the target region, wherein pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
- 6. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps: acquiring an original target image to be segmented; extracting geometric features of a region to be segmented in the original target image; acquiring a color feature of each pixel in the original target image; clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and segmenting the original target image according to the result of the clustering.
- 7. The computer device according to claim 6, wherein the step, performed by the processor, of clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel comprises: determining a target region to be segmented and a non-target region in the original target image according to the extracted geometric features; filling the target region and the non-target region with colors of different brightness values to distinguish the target region and the non-target region; blurring the filled image and computing a brightness value corresponding to each pixel in the blurred image; and clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel.
- 8. The computer device according to claim 7, wherein the step, performed by the processor, of clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel comprises: determining, according to the brightness value corresponding to each pixel, initial membership degrees of each pixel with respect to the target region and the non-target region; determining initial center values of the target region and the non-target region according to the color feature and the corresponding brightness value of each pixel; and clustering the pixels in the original target image according to the initial membership degrees, the initial center values, and the color feature and the corresponding brightness value of each pixel to obtain a probability that each pixel belongs to the target region.
- 9. The computer device according to claim 7, wherein the step, performed by the processor, of blurring the filled image and acquiring the brightness value of each pixel in the blurred image comprises: acquiring a preset blur radius and blurring the filled image according to the blur radius; and computing the brightness value corresponding to each pixel in the blurred image.
- 10. The computer device according to claim 6, wherein the step, performed by the processor, of segmenting the original target image according to the result of the clustering comprises: segmenting the region according to the clustered probability that each pixel belongs to the target region, wherein pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
- 11. One or more computer-readable non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps: acquiring an original target image to be segmented; extracting geometric features of a region to be segmented in the original target image; acquiring a color feature of each pixel in the original target image; clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel; and segmenting the original target image according to the result of the clustering.
- 12. The storage media according to claim 11, wherein the step, performed by the processor, of clustering the pixels in the original target image according to the extracted geometric features and the color feature of each pixel comprises: determining a target region to be segmented and a non-target region in the original target image according to the extracted geometric features; filling the target region and the non-target region with colors of different brightness values to distinguish the target region and the non-target region; blurring the filled image and computing a brightness value corresponding to each pixel in the blurred image; and clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel.
- 13. The storage media according to claim 12, wherein the step, performed by the processor, of clustering the pixels in the original target image according to the color feature and the corresponding brightness value of each pixel comprises: determining, according to the brightness value corresponding to each pixel, initial membership degrees of each pixel with respect to the target region and the non-target region; determining initial center values of the target region and the non-target region according to the color feature and the corresponding brightness value of each pixel; and clustering the pixels in the original target image according to the initial membership degrees, the initial center values, and the color feature and the corresponding brightness value of each pixel to obtain a probability that each pixel belongs to the target region.
- 14. The storage media according to claim 12, wherein the step, performed by the processor, of blurring the filled image and acquiring the brightness value of each pixel in the blurred image comprises: acquiring a preset blur radius and blurring the filled image according to the blur radius; and computing the brightness value corresponding to each pixel in the blurred image.
- 15. The storage media according to claim 11, wherein the step, performed by the processor, of segmenting the original target image according to the result of the clustering comprises: segmenting the region according to the clustered probability that each pixel belongs to the target region, wherein pixels whose probability is greater than a preset probability value are assigned to the target region, and pixels whose probability is less than or equal to the preset probability value are assigned to the non-target region.
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201610702051.8 | 2016-08-22 | |
CN201610702051.8A (CN106340023B) | 2016-08-22 | 2016-08-22 | Method and apparatus for image segmentation
Publications (1)

Publication Number | Publication Date
---|---
WO2018036462A1 | 2018-03-01
Family
ID=57824596
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2017/098417 (WO2018036462A1) | Method of image segmentation, computer device, and storage medium | 2016-08-22 | 2017-08-22

Country Status (2)

Country | Link
---|---
CN (1) | CN106340023B
WO (1) | WO2018036462A1
Also Published As

Publication Number | Publication Date
---|---
CN106340023B | 2019-03-05
CN106340023A | 2017-01-18
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17842885; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17842885; Country of ref document: EP; Kind code of ref document: A1