CN110838123A - Segmentation method for illumination highlight area of indoor design effect image - Google Patents


Info

Publication number
CN110838123A
CN110838123A (application CN201911074569.1A)
Authority
CN
China
Prior art keywords
vertex
clustering
cluster
effect image
similarity
Prior art date
Legal status
Granted
Application number
CN201911074569.1A
Other languages
Chinese (zh)
Other versions
CN110838123B (en)
Inventor
万倩倩
王庆利
苏亮亮
郑志华
Current Assignee
Nanjing Zhishan Intelligent Science And Technology Research Institute Co Ltd
Original Assignee
Nanjing Zhishan Intelligent Science And Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Zhishan Intelligent Science And Technology Research Institute Co Ltd filed Critical Nanjing Zhishan Intelligent Science And Technology Research Institute Co Ltd
Priority to CN201911074569.1A priority Critical patent/CN110838123B/en
Publication of CN110838123A publication Critical patent/CN110838123A/en
Application granted granted Critical
Publication of CN110838123B publication Critical patent/CN110838123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with fixed number of clusters, e.g. K-means clustering
    • G06T 5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics

Abstract

The invention provides a method for segmenting the illumination highlight area of an indoor design effect image, comprising the following steps: converting the effect image in RGB format into a five-dimensional vector; uniformly laying out all initial cluster vertices, calculating CIELab color space similarity, and merging similar surrounding cluster vertices to the center as a new cluster vertex; calculating the pixel similarity for each cluster vertex, and assigning the vertex number label of the most similar cluster vertex to the pixel point; connected regions labeled with the same vertex number constitute new superpixels. The segmentation method effectively arranges the positions of the superpixels and determines their number by a biased clustering method; superpixel segmentation requires no manual operation, which reduces the burden of manual work; the superpixel segmentation effect is good, and each superpixel can conveniently undergo image enhancement processing separately.

Description

Segmentation method for illumination highlight area of indoor design effect image
Technical Field
The invention relates to an indoor design effect image segmentation method, in particular to a segmentation method of an indoor design effect image illumination highlight area.
Background
The effect image of an indoor design originates from a three-dimensional scene drawn by professional technicians with software such as 3ds Max, AutoCAD or Photoshop, and is rendered into a two-dimensional image by computer graphics generation techniques. The converted effect image needs further computer vision processing to meet the expected effect and bring the user experience closer to the real effect. Highlight areas are produced by outdoor illumination or indoor lighting; because indoor illumination is uneven, light sources differ in shape, and lights of different materials differ in color, the resulting highlight areas differ in both shape and brightness. In practice, because the brightness of an illumination highlight area is already high, enhancing the whole image without distinction often produces highlight light spots and degrades the output of the effect image. Different parts of the same effect image therefore need to be enhanced separately, especially in images containing obvious illumination highlight areas. It is thus necessary to design a method for segmenting the illumination highlight area of an indoor design effect image, so that each superpixel can conveniently undergo image enhancement processing separately.
Disclosure of Invention
The invention aims to provide a method for segmenting the illumination highlight area of an indoor design effect image, which can segment the illumination highlight areas in the effect image so that each superpixel can conveniently undergo image enhancement processing separately.
In order to achieve the above object, the present invention provides a method for segmenting an illumination highlight region of an indoor design effect image, comprising the following steps:
step 1, converting an effect image in an RGB format into a five-dimensional vector by using a color model of a CIELab color space;
step 2, uniformly distributing each initial clustering vertex in the whole image of the effect image according to the initial number of the super pixels, distributing a unique vertex number label for each clustering vertex, calculating the CIELab color space similarity of each clustering vertex and the surrounding clustering vertices, if the clustering vertices are judged to be similar to the surrounding clustering vertices, combining the surrounding clustering vertices to the center as new clustering vertices, and traversing all the clustering vertices to perform similar combination clustering;
step 3, calculating the pixel similarity of each pixel point in the neighborhood around each clustering vertex and the clustering vertex closest to the pixel point, and assigning the vertex number label of the most similar clustering vertex to the pixel point;
and 4, repeatedly executing the step 3 until the vertex number labels of all the pixel points tend to be stable, and forming new superpixels by the connected regions with the same vertex number labels.
Further, in step 1, the RGB-format effect image is converted into a five-dimensional vector [ L, a, b, X, Y ], where L, a and b correspond to the values of three channels of the CIELab color space, and X and Y represent the spatial coordinates of the pixel point.
Further, in step 2, if the initial number of superpixels is N, the number K of initial cluster vertices is:
K = 4N    (1)
and then uniformly distributing K initial clustering vertexes in the effect image.
Further, in step 2, when the K initial clustering vertexes are uniformly distributed in the effect image, the gradient value of the pixels in the neighborhood around each initial clustering vertex is further calculated, and then the pixel point at the position with the minimum gradient in the neighborhood is replaced to be a new initial clustering vertex.
Further, in step 2, when calculating the CIELab color space similarity between each cluster vertex and the surrounding cluster vertices, the calculation formula is:
d_lab = sqrt((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)    (2)
in formula (2), d_lab is the color similarity between the pixel points; l_k, a_k and b_k are the three channel values of the CIELab color space of cluster vertex A, and l_i, a_i and b_i are the three channel values of the CIELab color space of a cluster vertex B around cluster vertex A.
Further, in step 2, when the similarity is determined: if the minimum of the similarity values between a cluster vertex B around cluster vertex A and the cluster vertices around B is the color similarity value d_lab of cluster vertex A and cluster vertex B, the two are judged similar; the cluster vertices in the four directions around cluster vertex A are then merged into cluster vertex A as a new cluster vertex, and the vertex number labels of the four merged cluster vertices are deleted.
Further, in step 2, when traversing all the clustering vertexes for similar merging clustering, recording the times of participation of each clustering vertex in similar clustering merging, if a certain clustering vertex has passed through two rounds of similar merging clustering, setting a clustering upper limit label for the clustering vertex, and in the process of similar merging and clustering, firstly judging whether the clustering vertex is set with the clustering upper limit label, if so, abandoning the similar clustering merging operation of the clustering vertex.
Further, in step 3, the calculation formula of the pixel similarity is as follows:
D_i = sqrt(d_lab^2 + (d_xy / S)^2 * m^2)    (3)
in formula (3), the smaller the value of D_i, the higher the similarity of the two pixel points; d_lab is the color similarity between the pixel points, m is a balance parameter weighing the proportion of color value and spatial information in the similarity measure, d_xy is the spatial distance between the pixel points, and S is 2 times the distance between the initial cluster vertices. The calculation formula of d_xy is:
d_xy = sqrt((x_k - x_i)^2 + (y_k - y_i)^2)    (4)
in formula (4), x_k and y_k are the coordinates of one pixel point, and x_i and y_i are the coordinates of the other pixel point. The calculation formula of S is:
S = 2 * sqrt(M / K)    (5)
in formula (5), M is the total number of pixels of the effect image and K is the number of initial cluster vertices; with K = 4N from formula (1), S = sqrt(M / N), where N is the initial number of superpixels.
Further, in step 4, after each new superpixel is constructed, each new superpixel is examined and superpixels having a size less than one quarter of the average superpixel size are merged with neighboring superpixels.
Further, in step 4, when each new super-pixel is checked, if there are multiple super-pixels with the same cluster vertex label, the super-pixel with the largest size is retained, and the rest super-pixels are merged with the adjacent super-pixels.
The invention has the beneficial effects that: the positions of the superpixels are effectively arranged and the number of superpixels is determined by the biased clustering method; superpixel segmentation requires no manual operation, which reduces the burden of manual work; the superpixel segmentation effect is good, and each superpixel can conveniently undergo image enhancement processing separately.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of merging centers of four adjacent clustering vertexes according to the present invention;
FIG. 3 is an experimental effect diagram of segmenting the illumination highlight area of an effect image by the prior-art SLIC method;
FIG. 4 is an experimental effect diagram of segmenting the illumination highlight area of an effect image by the segmentation method of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the embodiments.
Example 1:
as shown in fig. 1, the method disclosed by the invention for segmenting the illumination highlight area of an indoor design effect image comprises the following steps:
step 1, converting an effect image in an RGB format into a five-dimensional vector by using a color model of a CIELab color space;
step 2, uniformly distributing each initial clustering vertex in the whole image of the effect image according to the initial number of the super pixels, distributing a unique vertex number label for each clustering vertex, calculating the CIELab color space similarity of each clustering vertex and the surrounding clustering vertices, if the clustering vertices are judged to be similar to the surrounding clustering vertices, combining the surrounding clustering vertices to the center as new clustering vertices, and traversing all the clustering vertices to perform similar combination clustering;
step 3, calculating the pixel similarity between each pixel point in the neighborhood around each clustering vertex and the clustering vertex closest to the pixel point, setting the neighborhood range around the clustering vertex to be a 2S multiplied by 2S pixel range, wherein S is 2 times of the distance between the initial clustering vertices, assigning the vertex number label of the most similar clustering vertex to the pixel point, and setting the neighborhood range to be the 2S multiplied by 2S pixel range, so that the computation amount is greatly reduced;
and 4, repeatedly executing the step 3 until the vertex number labels of all the pixel points tend to be stable, the connected regions with the same vertex number label forming new superpixels; the number of repetitions is determined by the precision requirement of the image segmentation, and repeating step 3 ten times was selected in segmenting the illumination highlight area of the effect image, which achieves a fairly ideal effect.
The positions of the superpixels are effectively arranged and the number of superpixels is determined by the biased clustering method; superpixel segmentation requires no manual operation, which reduces the burden of manual work; the superpixel segmentation effect is good, and each superpixel can conveniently undergo image enhancement processing separately.
Further, in step 1, the RGB-format effect image is converted into a five-dimensional vector [L, a, b, X, Y], where L, a and b correspond to the values of the three channels of the CIELab color space, and X and Y represent the spatial coordinates of the pixel point. An effect image in RGB format cannot be converted directly into the CIELab format; it must first be converted into the XYZ format and then into the CIELab color space, and the specific conversion directly adopts an existing conversion algorithm.
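As an illustrative sketch (not part of the patent text; function names are assumptions), the step-1 pipeline RGB → XYZ → CIELab plus pixel coordinates can be written as follows, assuming 8-bit sRGB input and a D65 white point:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert one 8-bit sRGB pixel to CIELab via XYZ (D65 white point)."""
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # invert the sRGB gamma curve
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    # linear sRGB -> XYZ (D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ c
    xyz /= np.array([0.95047, 1.0, 1.08883])      # D65 reference white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

def to_five_dim(img):
    """Map an H x W x 3 RGB effect image to [L, a, b, X, Y] per pixel."""
    h, w, _ = img.shape
    out = np.zeros((h, w, 5))
    for y in range(h):
        for x in range(w):
            out[y, x] = (*rgb_to_lab(img[y, x]), x, y)
    return out
```

In practice the per-pixel loop would be vectorized, or an existing RGB-to-Lab conversion routine would be used, as the text above indicates.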
Further, in step 2, if the initial number of superpixels is N, the number K of initial cluster vertices is:
K = 4N    (1)
and then uniformly distributing K initial clustering vertexes in the effect image.
Further, in step 2, when the K initial cluster vertices are uniformly laid out in the effect image, the gradient values of the pixels in the neighborhood around each initial cluster vertex are also calculated, a 3 × 3 pixel region being selected as the surrounding neighborhood, and the pixel point at the minimum-gradient position in the neighborhood then replaces the original vertex as the new initial cluster vertex. This prevents a vertex from falling on the contour boundary of an object in the image and ensures the subsequent clustering effect. The cluster vertices for the next clustering step are screened by the biased clustering method, so the superpixel cluster vertices are unevenly distributed: relatively dense where the image detail density is higher, and relatively sparse where it is lower.
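A sketch of this layout-and-relocation step (illustrative only: the gradient is approximated by finite differences on the L channel, and an equal grid spacing of sqrt(pixels / K) is assumed):

```python
import numpy as np

def init_vertices(lab_l, k):
    """Lay out roughly k cluster vertices on a uniform grid over the L
    channel, then move each to the minimum-gradient pixel of its 3 x 3
    neighborhood so that no vertex sits on an object contour."""
    h, w = lab_l.shape
    step = int(np.sqrt(h * w / k))            # uniform grid spacing
    gy, gx = np.gradient(lab_l)               # finite-difference gradient
    grad = gx ** 2 + gy ** 2
    vertices = []
    for y in range(step // 2, h, step):
        for x in range(step // 2, w, step):
            y0, x0 = max(y - 1, 0), max(x - 1, 0)
            win = grad[y0:min(y + 2, h), x0:min(x + 2, w)]
            dy, dx = np.unravel_index(np.argmin(win), win.shape)
            vertices.append((y0 + dy, x0 + dx))
    return vertices
```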
Further, in step 2, when calculating the CIELab color space similarity between each cluster vertex and the surrounding cluster vertices, the calculation formula is:
d_lab = sqrt((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)    (2)
in formula (2), d_lab is the color similarity between the pixel points; l_k, a_k and b_k are the three channel values of the CIELab color space of cluster vertex A, and l_i, a_i and b_i are the three channel values of the CIELab color space of a cluster vertex B around cluster vertex A.
Further, in step 2, when the similarity is determined: if the minimum of the similarity values between a cluster vertex B around cluster vertex A and the cluster vertices around B is the color similarity value d_lab of cluster vertex A and cluster vertex B, the two are judged similar; the cluster vertices in the four directions around cluster vertex A are then merged into cluster vertex A as a new cluster vertex, and the vertex number labels of the four merged cluster vertices are deleted, as shown in FIG. 2.
Further, in step 2, when traversing all the clustering vertexes for similar merging clustering, recording the times of participation of each clustering vertex in similar clustering merging, if a certain clustering vertex has been subjected to similar merging clustering twice, setting a clustering upper limit label for the clustering vertex, and in the process of similar merging and clustering, firstly judging whether the clustering vertex is set with the clustering upper limit label, if so, abandoning the similar clustering merging operation of the clustering vertex.
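The merge step with the clustering upper-limit label might look like the sketch below. It is a simplification under stated assumptions: the minimum-over-neighborhood similarity test described above is replaced by a plain threshold on d_lab, and the dict layout ({'color': (l, a, b), 'rounds': n}) is invented for illustration:

```python
import math

def d_lab(c1, c2):
    """Formula (2): Euclidean CIELab distance between two vertex colors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(c1, c2)))

def merge_similar(center, neighbours, threshold, limit=2):
    """One biased-clustering merge: absorb the four surrounding vertices
    into the center if all are similar enough, tracking how many merge
    rounds the center has taken part in.  Returns the new center vertex,
    or None when no merge happens."""
    if center['rounds'] >= limit:
        return None                      # clustering upper-limit label set
    if any(d_lab(center['color'], nb['color']) > threshold
           for nb in neighbours):
        return None                      # some neighbour is dissimilar
    cols = [center['color']] + [nb['color'] for nb in neighbours]
    mean = tuple(sum(c[i] for c in cols) / len(cols) for i in range(3))
    return {'color': mean, 'rounds': center['rounds'] + 1}
```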
Further, in step 3, the calculation formula of the pixel similarity is as follows:
D_i = sqrt(d_lab^2 + (d_xy / S)^2 * m^2)    (3)
in formula (3), the smaller the value of D_i, the higher the similarity of the two pixel points; d_lab is the color similarity between the pixel points, m is a balance parameter weighing the proportion of color value and spatial information in the similarity measure, d_xy is the spatial distance between the pixel points, and S is 2 times the distance between the initial cluster vertices. The calculation formula of d_xy is:
d_xy = sqrt((x_k - x_i)^2 + (y_k - y_i)^2)    (4)
in formula (4), x_k and y_k are the coordinates of one pixel point, and x_i and y_i are the coordinates of the other pixel point. The calculation formula of S is:
Figure BDA0002262016520000053
in equation (5), N is the initial number of superpixels, and K is the number of initial cluster vertices.
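Formulas (3) and (4) can be sketched as one distance routine in SLIC style (an assumption-laden sketch: m and S are passed as plain parameters, and a smaller returned value means a more similar pixel):

```python
import math

def pixel_distance(c1, c2, p1, p2, m=10.0, s=20.0):
    """Combined color + spatial distance between a pixel and a cluster
    vertex: c1/c2 are (l, a, b) colors, p1/p2 are (x, y) coordinates,
    m balances color against spatial information, and s is 2 times the
    initial cluster-vertex spacing."""
    dlab = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))  # formula (2)
    dxy = math.hypot(p1[0] - p2[0], p1[1] - p2[1])               # formula (4)
    return math.sqrt(dlab ** 2 + (dxy / s) ** 2 * m ** 2)        # formula (3)
```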
Further, in the process of forming new superpixels, a superpixel may come to contain other superpixels, be too small, or share the same vertex number label with several disconnected superpixels. Each new superpixel in step 4 is therefore optimized: checking each new superpixel from left to right and from top to bottom, superpixels smaller than one quarter of the average superpixel size are merged with an adjacent superpixel; if multiple superpixels carry the same cluster vertex label, the largest is retained and the rest are merged with adjacent superpixels.
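The step-4 clean-up might be sketched as below; "merge with the adjacent superpixel" is simplified here to relabelling each pixel of an undersized superpixel with the label of its left or upper neighbour (an assumption, since the text does not fix which neighbour absorbs it):

```python
import numpy as np

def absorb_small_superpixels(labels):
    """Relabel superpixels smaller than a quarter of the average
    superpixel size into a neighbouring superpixel, given a 2-D map of
    vertex-number labels."""
    ids, counts = np.unique(labels, return_counts=True)
    small = set(ids[counts < counts.mean() / 4])
    out = labels.copy()
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            if out[y, x] in small:
                if x > 0 and out[y, x - 1] not in small:
                    out[y, x] = out[y, x - 1]       # absorb leftwards
                elif y > 0 and out[y - 1, x] not in small:
                    out[y, x] = out[y - 1, x]       # absorb upwards
    return out
```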
As shown in fig. 3 and 4, the segmentation method of the present invention is compared with the SLIC method on the illumination highlight area. To keep the experimental conditions of the two methods consistent, the number of superpixels for the SLIC method is set to 200, the optimal number identified in the related literature. In terms of experimental time, image segmentation with the SLIC method is slightly faster than with the present segmentation method, because the present method adds one extra round of vertex clustering; however, only the cluster vertices take part in that round, so the added delay is negligible. In addition, the step of manually inputting the cluster count is removed during operation, which shortens the actual total running time and improves running efficiency.
The experimental effect diagrams contain several illumination highlight areas of different forms; the elliptical halo and the strip-shaped light band on the ceiling are particularly obvious. Fig. 3 shows the result of segmenting the illumination highlight area with the SLIC method, and fig. 4 the result with the segmentation method of the present invention. From the segmentation results: the superpixels produced by SLIC are regular in shape and uniform in size, but the boundary lines of some real objects deviate under this regular shape; the superpixels generated by the present segmentation method are irregular and non-uniform, yet the shapes of real objects are clearly segmented. For the highlight area caused by the strip-shaped light band, the SLIC method fails to segment the outline of the area, whereas the present segmentation method clearly captures its form. For the elliptical illumination highlight area at the pendant lamp, the outline segmented by SLIC is petal-shaped, with obvious under-segmentation and over-segmentation, while the outline segmented by the present method is a fairly regular ellipse, clearly superior in shape.
As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A segmentation method for an indoor design effect image illumination highlight area is characterized by comprising the following steps:
step 1, converting an effect image in an RGB format into a five-dimensional vector by using a color model of a CIELab color space;
step 2, uniformly distributing each initial clustering vertex in the whole image of the effect image according to the initial number of the super pixels, distributing a unique vertex number label for each clustering vertex, calculating the CIELab color space similarity of each clustering vertex and the surrounding clustering vertices, if the clustering vertices are judged to be similar to the surrounding clustering vertices, combining the surrounding clustering vertices to the center as new clustering vertices, and traversing all the clustering vertices to perform similar combination clustering;
step 3, calculating the pixel similarity of each pixel point in the neighborhood around each clustering vertex and the clustering vertex closest to the pixel point, and assigning the vertex number label of the most similar clustering vertex to the pixel point;
and 4, repeatedly executing the step 3 until the vertex number labels of all the pixel points tend to be stable, and setting the connected region of the same vertex number label as a new superpixel.
2. The method of claim 1, wherein in step 1, the RGB-format effect image is converted into a five-dimensional vector [ L, a, b, X, Y ], where L, a and b correspond to the values of three channels in the CIELab color space, and X and Y represent the spatial coordinates of the pixel point.
3. The method for segmenting highlight areas in indoor design effect images according to claim 1, wherein in step 2, if the initial number of superpixels is N, the number K of initial cluster vertices is:
K = 4N    (1)
and then uniformly distributing K initial clustering vertexes in the effect image.
4. The method for segmenting highlight areas for indoor design effect image illumination according to claim 1, characterized in that in step 2, when K initial clustering vertexes are uniformly laid out in the effect image, gradient values of pixels in neighborhoods around each initial clustering vertex are further calculated, and then the pixel point at the minimum gradient position in the neighborhoods is replaced with a new initial clustering vertex.
5. The method for segmenting the highlight area of the indoor design effect image illumination according to claim 1, wherein in the step 2, when calculating the CIELab color space similarity between each cluster vertex and the surrounding cluster vertices, the calculation formula is:
d_lab = sqrt((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)    (2)
in formula (2), d_lab is the color similarity between the pixel points; l_k, a_k and b_k are the three channel values of the CIELab color space of cluster vertex A, and l_i, a_i and b_i are the three channel values of the CIELab color space of a cluster vertex B around cluster vertex A.
6. The method according to claim 5, wherein in step 2, when the similarity is determined, if the minimum of the similarity values between a cluster vertex B around cluster vertex A and the cluster vertices around B is the color similarity value d_lab of cluster vertex A and cluster vertex B, the two are judged similar; the cluster vertices in the four directions around cluster vertex A are then merged into cluster vertex A as a new cluster vertex, and the vertex number labels of the four merged cluster vertices are deleted.
7. The method for segmenting highlight areas for indoor design effect images according to claim 1, characterized in that in step 2, when traversing all cluster vertexes for similar merging clustering, the times of participation of each cluster vertex in similar clustering are recorded, if a certain cluster vertex has undergone two similar merging clusters, a cluster upper limit label is set for the cluster vertex, and in the process of similar merging clustering, it is first determined whether the cluster vertex has set a cluster upper limit label, and if so, the similar clustering merging operation of the cluster vertex is abandoned.
8. The method for segmenting the highlight area of the indoor design effect image according to claim 1, wherein in the step 3, the calculation formula of the pixel similarity is as follows:
D_i = sqrt(d_lab^2 + (d_xy / S)^2 * m^2)    (3)
in formula (3), the smaller the value of D_i, the higher the similarity of the two pixel points; d_lab is the color similarity between the pixel points, m is a balance parameter weighing the proportion of color value and spatial information in the similarity measure, d_xy is the spatial distance between the pixel points, and S is 2 times the distance between the initial cluster vertices; the calculation formula of d_xy is:
d_xy = sqrt((x_k - x_i)^2 + (y_k - y_i)^2)    (4)
in formula (4), x_k and y_k are the coordinates of one pixel point, and x_i and y_i are the coordinates of the other pixel point; the calculation formula of S is:
S = 2 * sqrt(M / K)    (5)
in formula (5), M is the total number of pixels of the effect image and K is the number of initial cluster vertices; with K = 4N from formula (1), S = sqrt(M / N), where N is the initial number of superpixels.
9. The method of segmenting illuminated highlight regions for indoor design effects image of claim 1, wherein in step 4, after constructing each new superpixel, each new superpixel is examined and superpixels having a size less than one quarter of the average superpixel size are merged with neighboring superpixels.
10. The method of claim 9, wherein in step 4, when checking each new super pixel, if there are multiple super pixels with the same cluster vertex label, the super pixel with the largest size is retained, and the rest super pixels are merged with the neighboring super pixels.
CN201911074569.1A 2019-11-06 2019-11-06 Segmentation method for illumination highlight area of indoor design effect image Active CN110838123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911074569.1A CN110838123B (en) 2019-11-06 2019-11-06 Segmentation method for illumination highlight area of indoor design effect image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911074569.1A CN110838123B (en) 2019-11-06 2019-11-06 Segmentation method for illumination highlight area of indoor design effect image

Publications (2)

Publication Number Publication Date
CN110838123A (en) 2020-02-25
CN110838123B CN110838123B (en) 2022-02-11

Family

ID=69576423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911074569.1A Active CN110838123B (en) 2019-11-06 2019-11-06 Segmentation method for illumination highlight area of indoor design effect image

Country Status (1)

Country Link
CN (1) CN110838123B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292328A (en) * 2023-11-24 2023-12-26 山东新中鲁建设有限公司 Safety management and monitoring method and system for construction quality of assembled building

Citations (9)

Publication number Priority date Publication date Assignee Title
CN103793504A (en) * 2014-01-24 2014-05-14 北京理工大学 Cluster initial point selection method based on user preference and project properties
US20150187070A1 (en) * 2012-08-24 2015-07-02 Singapore Health Services Pte Ltd. Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
CN106228188A (en) * 2016-07-22 2016-12-14 北京市商汤科技开发有限公司 Clustering method, device and electronic equipment
CN106570873A (en) * 2016-11-08 2017-04-19 江苏大学 Medical image segmentation method
CN106778821A (en) * 2016-11-25 2017-05-31 西安电子科技大学 Classification of Polarimetric SAR Image method based on SLIC and improved CNN
CN109389601A (en) * 2018-10-19 2019-02-26 山东大学 Color image superpixel segmentation method based on similitude between pixel
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion
CN110324617A (en) * 2019-05-16 2019-10-11 西安万像电子科技有限公司 Image processing method and device
CN110415208A (en) * 2019-06-10 2019-11-05 西安电子科技大学 A kind of adaptive targets detection method and its device, equipment, storage medium


Non-Patent Citations (3)

Title
CHUN-YAN HAN: "Improved SLIC image segmentation algorithm based on K-means", Cluster Computing *
LI PENG et al.: "Fast biased clustering superpixel algorithm applying visual saliency", Journal of Xi'an Jiaotong University *
YANG YAN et al.: "SLIC algorithm with optimized initial cluster centers for weighted kernel K-means clustering", Journal of Frontiers of Computer Science and Technology *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN117292328A (en) * 2023-11-24 2023-12-26 山东新中鲁建设有限公司 Safety management and monitoring method and system for construction quality of assembled building
CN117292328B (en) * 2023-11-24 2024-02-02 山东新中鲁建设有限公司 Safety management and monitoring method and system for construction quality of assembled building

Also Published As

Publication number Publication date
CN110838123B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN108352083B (en) 2D image processing for stretching into 3D objects
CN109816669A (en) A kind of improvement Mask R-CNN image instance dividing method identifying power equipments defect
CN108510562B (en) Digital camouflage pattern generation method based on image fractal texture
KR20020039721A (en) Method and apparatus for sectioning image into a plurality of regions
CN110569859B (en) Color feature extraction method for clothing image
JP2000011138A (en) Color coordinate space structure and color quantizing method using color coordinate and color spreading
CN113052859A (en) Super-pixel segmentation method based on self-adaptive seed point density clustering
CN110838123B (en) Segmentation method for illumination highlight area of indoor design effect image
CN114386295B (en) Textile computer simulation method based on color separation and color change of colored spun yarns
JP6294700B2 (en) Image processing apparatus and image processing method
US9875555B2 (en) Partitioning an image
CN112365517A (en) Super-pixel segmentation method based on image color and density characteristics
KR100602739B1 (en) Semi-automatic field based image metamorphosis using recursive control-line matching
CN111046783A (en) Slope geological disaster boundary extraction method for improving watershed algorithm
US10115181B2 (en) Systems for automatically assembling tile maps and associated techniques
KR101098830B1 (en) Surface texture mapping apparatus and its method
CN110097500B (en) AI-based image lossless amplification method
CN109087371B (en) Method and system for controlling robot portrait
CN105427354B (en) Image vector expression based on plane set of blocks
CN116994003B (en) Two-dimensional rounded corner and bottom corner combined characteristic identification method for aviation structural part
JP2576336B2 (en) Image division method and apparatus
JP3055779B1 (en) Image division method
AU5262099A (en) Method and apparatus for segmenting images
CN117197275A (en) Terrain rendering method and device
KR970049862A (en) Image generating apparatus using image processing and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant