CN113139936B - Image segmentation processing method and device - Google Patents


Publication number
CN113139936B
Authority
CN
China
Prior art keywords
image
contour
clustering
pixel
boundary
Prior art date
Legal status
Active
Application number
CN202110349479.XA
Other languages
Chinese (zh)
Other versions
CN113139936A (en)
Inventor
鲍俊芳
刘怀广
宋子逵
张立恒
项茹
蒋俊
任玉明
刘睿
陈昊
Current Assignee
Wuhan Iron and Steel Co Ltd
Original Assignee
Wuhan Iron and Steel Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Iron and Steel Co Ltd filed Critical Wuhan Iron and Steel Co Ltd
Priority to CN202110349479.XA priority Critical patent/CN113139936B/en
Publication of CN113139936A publication Critical patent/CN113139936A/en
Application granted granted Critical
Publication of CN113139936B publication Critical patent/CN113139936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0002 — Image analysis: inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213 — Pattern recognition: non-hierarchical clustering using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G06T 7/12 — Segmentation; edge detection: edge-based segmentation
    • G06T 7/13 — Segmentation; edge detection: edge detection
    • G06T 7/136 — Segmentation; edge detection: involving thresholding
    • G06T 2207/10052 — Image acquisition modality: images from lightfield camera


Abstract

The embodiment of the invention provides an image segmentation processing method. The acquired optical tissue image of an object to be analyzed is first preprocessed to obtain a preprocessed image of the object; adaptive binarization is then performed on the preprocessed image based on a threshold matched to each pixel, yielding a binarized image corresponding to the preprocessed image; next, the boundary contours of the binarized image are determined, the main body contour is determined based on the involvement relationships between the boundary contours, and the main body image is segmented from the preprocessed image according to the main body contour; finally, adaptive clustering based on the color space is performed on the main body image to obtain its color quantization result. The method reduces the over-segmentation and under-segmentation of traditional image segmentation methods and reduces air holes being mistakenly classified as air-hole walls, so applying it can improve the accuracy of classifying and identifying the optical tissues of objects with anisotropic structures.

Description

Image segmentation processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image segmentation processing method and apparatus.
Background
Traditional image segmentation generally relies on manual segmentation, which is time-consuming and labor-intensive. At present, image segmentation based on image processing technology frees up manual work to a certain extent and guarantees a certain level of segmentation accuracy.
In some images with unclear light-dark resolution, such as optical tissue images of objects, the light-dark difference between the air holes and the air-hole walls of an object is obvious, but the components of the object tissue overlap to a certain extent. In the prior art, the air holes of part of the image are mistakenly classified as air-hole walls, so the image is over-segmented or under-segmented; moreover, different polarized light affects the colors of the image and increases the light-dark differences, so the various tissues of objects with anisotropic structures cannot be accurately segmented and extracted.
Disclosure of Invention
The embodiment of the invention provides an image segmentation processing method and device, which solve the problems in the related art of low accuracy, over-segmentation, and under-segmentation when segmenting the optical tissue image and non-optical tissue image of an object to be analyzed.
In a first aspect, the present invention provides an image segmentation processing method according to an embodiment of the present invention, including: preprocessing the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed; performing adaptive binarization on the preprocessed image based on a threshold matched to each pixel to obtain a binarized image corresponding to the preprocessed image; determining the boundary contours of the binarized image, determining the main body contour based on the involvement relationships between the boundary contours, and segmenting the main body image from the preprocessed image according to the main body contour; and performing adaptive clustering on the main body image based on the color space to obtain a color quantization result of the main body image.
Preferably, the preprocessing the acquired optical tissue image of the object to be analyzed includes: and performing resolution reduction processing and graying processing on the optical tissue image, and performing filtering processing on the image subjected to the graying processing to obtain a preprocessed image of the object to be analyzed.
Preferably, the adaptive binarization processing on the preprocessed image based on the threshold value matched with each pixel comprises: acquiring a pixel mean value of a neighborhood corresponding to each pixel based on a preset neighborhood of each pixel; and performing self-adaptive binarization processing on the preprocessed image based on a preset threshold offset and a pixel mean value corresponding to each pixel.
Preferably, the determining the boundary contour of the binarized image includes: performing topology analysis on the binarized image corresponding to the preprocessed image, and scanning the topology analysis result line by line to obtain the boundary contours in the binarized image; the determining the main body contour based on the involvement relationships between the boundary contours includes: screening out target contours by judging whether each boundary contour has a parent contour and/or a child contour; and screening the main body contour in the binarized image from the target contours based on the area coefficient corresponding to each boundary contour and a preset retention coefficient.
Preferably, the judging whether the boundary contour has a parent contour and/or a child contour and screening out a target contour includes: for each boundary contour, judging whether the boundary contour has a parent contour, and if so, keeping the boundary contour as a target contour; otherwise, judging whether the boundary contour has a child contour; if a child contour exists, keeping the boundary contour as a target contour, and otherwise eliminating the boundary contour.
Preferably, the segmenting the subject image from the preprocessed image according to the subject contour includes: according to the main body outline, determining an image area surrounded by the main body outline in the preprocessed image, and obtaining the main body image by reserving the image area; and carrying out resolution magnification processing on the subject image to enable the resolution of the subject image to be consistent with that of the optical tissue image.
Preferably, the adaptively clustering the subject image based on the color space includes: converting pixels of the subject image into an HSV nonlinear spatial representation; inputting pixel samples of the subject image represented in the HSV nonlinear space into a target clustering model for clustering to obtain a color quantization result of the subject image, wherein the similarity between the pixel samples is measured by using the Mahalanobis distance in the process of clustering through the target clustering model.
Preferably, inputting the pixel samples of the subject image represented in the HSV nonlinear space into a target clustering model for clustering includes: Step A: determining the pixel point with the maximum density of main-component pixels as the initial clustering center; Step B: obtaining the effective value corresponding to the initial clustering center, and determining the pixel point farthest from the initial clustering center; Step C: clustering the subject image using the initial clustering center and obtaining the effective value of the pixel point farthest from it; if this effective value is greater than the effective value corresponding to the initial clustering center, determining that pixel point as the next clustering center, and otherwise finishing the clustering process; Step D: determining the pixel point farthest from the last clustering center, clustering the subject image using the last clustering center, and obtaining the effective value of that pixel point; if it is less than or equal to the effective value corresponding to the last clustering center, finishing the clustering process, and otherwise determining the pixel point as the next clustering center and repeating Step D.
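The iterative center selection in steps A through D above can be sketched in plain Python. The patent does not define how "density" and "effective value" are computed, so both are injected here as caller-supplied scoring functions; this is an illustrative reading of the loop structure, not the patented computation.

```python
import math

def select_centers(samples, density, effective):
    """Sketch of the adaptive cluster-center selection (steps A-D).

    `density` and `effective` are hypothetical scoring functions
    supplied by the caller, since the text does not specify them.
    """
    # Step A: the pixel with maximum density is the initial center.
    centers = [max(samples, key=density)]
    while True:
        # Steps B/D: locate the sample farthest from the last center.
        far = max(samples, key=lambda s: math.dist(s, centers[-1]))
        # Step C: keep it as the next center only while its effective
        # value exceeds that of the last center; otherwise stop.
        if effective(far) > effective(centers[-1]) and far not in centers:
            centers.append(far)
        else:
            return centers
```

A usage sketch: with two well-separated groups of pixel samples, the loop picks the densest point first and then the far point whose effective value still exceeds the current center's, stopping as soon as that condition fails.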
In a second aspect, the present invention provides an image segmentation processing apparatus according to an embodiment of the present invention, including: the image preprocessing unit is used for preprocessing the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed; the image binarization unit is used for carrying out self-adaptive binarization processing on the preprocessed image based on the threshold value matched with each pixel to obtain a binarization image corresponding to the preprocessed image; the image segmentation unit is used for determining the boundary contour of the binary image, determining a main body contour based on the inter-contour linkage relation of the boundary contour, and segmenting a main body image from the preprocessed image according to the main body contour; and the image clustering unit is used for carrying out self-adaptive clustering processing on the main body image based on the color space to obtain a color quantization result of the main body image.
In a third aspect, the present invention provides an electronic device according to an embodiment of the present invention, including: a memory, a processor and code stored on the memory and executable on the processor, the processor implementing any of the embodiments of the first aspect when executing the code.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
The method first preprocesses the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed; then performs adaptive binarization on the preprocessed image based on a threshold matched to each pixel to obtain a binarized image corresponding to the preprocessed image; determines the boundary contours of the binarized image, determines the main body contour based on the involvement relationships between the boundary contours, and segments the main body image from the preprocessed image according to the main body contour; and finally performs adaptive clustering on the main body image based on the color space to obtain a color quantization result of the main body image. In this technical solution, the main body contour in the binarized image is determined from the involvement relationships between the boundary contours, the image edges are segmented based on the main body contour, and on this basis the image region enclosed by the main body contour is adaptively clustered in the color space. This effectively reduces cases in which air holes are mistakenly classified as air-hole walls, so the optical tissue image and non-optical tissue image of the object to be analyzed can be segmented more accurately, and over-segmentation and under-segmentation of the image are reduced.
The image segmentation processing method and the image segmentation processing device can reduce over-segmentation and under-segmentation conditions of the image, and can more accurately segment the optical tissue image of the object to be analyzed, thereby laying a foundation for subsequent tissue identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on them without creative effort.
FIG. 1 is a flowchart of an image segmentation processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the determination of the body contour based on the inter-contour involvement of the boundary contour in FIG. 1;
FIG. 3 is a diagram illustrating an exemplary architecture of an image segmentation processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a structure of an image segmentation processing apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an image segmentation processing method and device, which are used to solve the problems of image over-segmentation and under-segmentation caused by low accuracy when segmenting the optical tissue image and non-optical tissue image of an object to be analyzed in the related art.
In order to solve the technical problems, the embodiment of the invention provides the following general ideas:
Firstly, the acquired optical tissue image of the object to be analyzed is preprocessed to obtain a preprocessed image of the object to be analyzed; then, adaptive binarization is performed on the preprocessed image based on a threshold matched to each pixel to obtain a corresponding binarized image; the boundary contours of the binarized image are determined, the main body contour is determined based on the involvement relationships between the boundary contours, and the main body image is segmented from the preprocessed image according to the main body contour; finally, adaptive clustering based on the color space is performed on the main body image to obtain its color quantization result. In this technical solution, the main body contour in the binarized image is determined from the involvement relationships between the boundary contours, the image edges are segmented based on the main body contour, and the image region enclosed by the main body contour is then adaptively clustered in the color space, which effectively reduces cases in which air holes are mistakenly classified as air-hole walls; the optical tissue image and non-optical tissue image of the object to be analyzed can thus be segmented more accurately, and over-segmentation and under-segmentation of the image are reduced.
The image segmentation processing method and the image segmentation processing device can reduce the over-segmentation and under-segmentation conditions of the image, can more accurately segment the optical tissue image of the object to be analyzed, and lay a foundation for subsequent tissue identification.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
First, it is stated that the term "and/or" appearing herein is merely one type of associative relationship that describes an associated object, meaning that three types of relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In a first aspect, the present invention provides an image segmentation processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S101: and preprocessing the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed.
It should be noted that the object to be analyzed has an anisotropic structure. Anisotropy refers to the property that the physical and chemical characteristics of an object vary with direction, that is, the measured performance values of the object differ in different directions. Objects having an anisotropic structure include coke, wood, fiberboard, carbon fiber composite materials, and the like; the optical tissue image of the object to be analyzed is therefore an optical image of the various tissues of an anisotropic structure.
Specifically, resolution reduction processing and graying processing are performed on the optical tissue image, and filtering processing is performed on the image subjected to the graying processing to obtain a preprocessed image of the object to be analyzed.
In a specific implementation, to acquire the optical tissue image of the object to be analyzed, the object may be sectioned to obtain a polished-block specimen, and oil applied to the specimen so that the optical tissue of the object can be observed. The specimen is then magnified with a microscope, and the microscope's field of view is captured by a high-resolution camera to obtain the optical tissue image; the microscope may be a polarizing microscope, and its magnification may be 400 to 600 times. It should be noted that the optical tissue image includes a tissue image of the object to be analyzed. Alternatively, the section of the object may be magnified directly by a high-performance image measuring instrument and the instrument's viewing area captured directly, thereby obtaining the optical tissue image of the object to be analyzed.
In a specific implementation, because the optical tissue image is large, directly extracting the main body contour from it may give a poor tissue contour. To extract a better contour, the tissue of the object to be analyzed needs to be more concentrated in the image; following the multi-resolution idea, the width and height of the optical tissue image are reduced in equal proportion. After this resolution reduction, the image is grayscaled, and the grayscaled image is then Gaussian filtered to obtain the preprocessed image of the object to be analyzed. The width and height of the optical tissue image may be reduced to half of the original size, and correspondingly the Gaussian filter template may be 3 × 3.
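The preprocessing chain above (grayscale, halve the resolution, 3 × 3 Gaussian smoothing) can be sketched in plain Python. The luminance weights and 2 × 2 block averaging are conventional choices assumed for illustration; the patent does not specify them.

```python
def preprocess(rgb):
    """Sketch of the preprocessing step: grayscale, halve width and
    height, then smooth with a 3x3 Gaussian template.  `rgb` is a
    list of rows of (R, G, B) tuples."""
    # Grayscale via the usual luminance weights (an assumption; the
    # patent does not specify the conversion).
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

    # Reduce width and height in equal proportion (here: average
    # 2x2 blocks to halve each dimension).
    half = [[(gray[2 * i][2 * j] + gray[2 * i][2 * j + 1] +
              gray[2 * i + 1][2 * j] + gray[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(len(gray[0]) // 2)]
            for i in range(len(gray) // 2)]

    # 3x3 Gaussian template (1 2 1; 2 4 2; 1 2 1) / 16; border
    # pixels are left untouched for brevity.
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    h, w = len(half), len(half[0])
    out = [row[:] for row in half]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(k[a][b] * half[i - 1 + a][j - 1 + b]
                            for a in range(3) for b in range(3)) / 16.0
    return out
```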
Step S102: and performing self-adaptive binarization processing on the preprocessed image based on the threshold value matched with each pixel to obtain a binarized image corresponding to the preprocessed image.
Specifically, the method includes the steps of firstly obtaining a pixel mean value of a neighborhood corresponding to each pixel based on a preset neighborhood of each pixel, and then carrying out self-adaptive binarization processing on a preprocessed image based on a preset threshold offset and the pixel mean value corresponding to each pixel.
In a specific implementation process, the threshold matched to each pixel can be obtained by the following formula:

F(i, j) = P̄(i, j) − C

wherein F(i, j) is the threshold matched to the pixel point P(i, j), P̄(i, j) is the mean of the neighborhood pixels around the pixel point P(i, j), and C is the preset threshold offset;
in a specific implementation process, the average value of neighborhood pixels around the pixel point P (i, j) may be an arithmetic average value of a local neighborhood block, or a weighted average algorithm of the local neighborhood block may be used to obtain a weighted average value of pixel gray levels in the pixel neighborhood block as the average value of the neighborhood pixels;
For the weighted average algorithm of the local neighborhood block, weighting may be performed in a local adaptive window of 251 × 251; for example, the neighborhood pixel mean may be obtained using the following formula:

P̄(i, j) = Σ_{(M,N)∈Aij} Q(i, j, M, N) · P(M, N) / Σ_{(M,N)∈Aij} Q(i, j, M, N)

wherein Q(i, j, M, N) is the weight corresponding to pixel (M, N) in the neighborhood Aij, P(i, j) is the central pixel value, and P̄(i, j) is the neighborhood pixel mean of the pixel point P(i, j);
Further, in the above formula for the neighborhood pixel mean, the weight of a pixel may be obtained, for example, by a Gaussian-form weight:

Q(i, j, M, N) = exp(−((i − M)² + (j − N)²) / (2σ²))

wherein Aij is the neighborhood of the pixel point (i, j) in the preprocessed image, and σ is a hyperparameter;
σ is used to control the local extent of the weighting: the smaller σ is, the fewer samples effectively contribute to the fit and the greater the weight change, but σ should not be too small, or overfitting can occur. In addition, the neighborhood Aij of a pixel may be 7 × 7, 4 × 4, or another size, which is not limited here.
Because the pixel values of the optical tissue image of the object to be analyzed vary greatly between pixels, dense noise points exist in the tissue pixel area of the image obtained by the adaptive binarization. Noise suppression can be performed on the binarized image by morphological filtering, where the morphological filtering kernel may be 3 × 3 and the filtering may be repeated 9 to 12 times.
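The adaptive thresholding described above can be sketched in plain Python with an arithmetic neighborhood mean; the small k × k window here is a readability stand-in for the 251 × 251 weighted window in the text, and the offset sign follows the common convention of subtracting C from the mean.

```python
def adaptive_binarize(img, k=3, c=2.0):
    """Sketch of per-pixel adaptive binarization with threshold
    F(i, j) = neighborhood mean - C, the neighborhood being a k x k
    block clamped at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = k // 2
    for i in range(h):
        for j in range(w):
            block = [img[m][n]
                     for m in range(max(0, i - r), min(h, i + r + 1))
                     for n in range(max(0, j - r), min(w, j + r + 1))]
            threshold = sum(block) / len(block) - c  # F(i, j)
            out[i][j] = 255 if img[i][j] > threshold else 0
    return out
```

Because each pixel is compared against its own local mean, an isolated bright pixel is kept white while uniform dark regions go black, which is the behavior the per-pixel threshold is meant to provide.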
Step S103: determining the boundary contour of the binary image, determining a main body contour based on the inter-contour involvement relation of the boundary contour, and segmenting the main body image from the preprocessed image according to the main body contour.
Specifically, topology analysis is performed on the binarized image corresponding to the preprocessed image, and the result of the topology analysis is scanned to obtain the boundary contour in the binarized image.
In the specific implementation process, after the binarized image corresponding to the preprocessed image is obtained, its contours are formed by the continuous white pixel points in the neighborhoods of all black pixel points. Topology analysis is performed on the binarized image and the result of the topology analysis is scanned to determine all boundary contours and their hierarchical relationships; each boundary contour is numbered Bi, and every time a new boundary is scanned, i is incremented by 1, so that all boundary contours in the binarized image are obtained. The coordinates of each pixel of a contour boundary can be recorded as the set Mi; for example:

Mi = {(pi1, qi1), (pi2, qi2), (pi3, qi3), …, (pin, qin)}

wherein (pi1, qi1) are the coordinates from the first line scan of the i-th boundary, and (pin, qin) are the coordinates from the n-th line scan of the i-th boundary;
by highlighting the coordinates of these pixels in the optical tissue image, a binarized image is obtained which includes all the boundary contours.
Based on the characteristics of the binarized image including the boundary contours, all contours in the optical tissue can be divided into three major categories: the maximum contour, the contours inside the maximum contour, and the contours on the same level as the maximum contour. After removing the contours on the same level as the maximum contour, many fine contours remain inside the main body contour; these fine contours are all sub-contours of the maximum contour. Combining this with the characteristics of the optical tissue image, some of these sub-contours are air-hole contours and some are fine contours between pixel gaps of the main component. To distinguish the air-hole contours from the fine contours between the pixel gaps while keeping as many pixels of the main body component as possible, the fine contours between the pixel gaps need to be removed and the visible, large air-hole contours retained, thereby obtaining the main body contour.
Further, the main body contour may be determined based on a relationship between contours of the boundary contour, as shown in fig. 2, specifically, a target contour is screened out by judging whether the boundary contour has a parent contour and/or a child contour, and the main body contour in the binarized image is screened out from the target contour based on an area coefficient corresponding to the boundary contour and a preset retention coefficient.
In a specific implementation process, the area coefficient corresponding to a boundary contour may be obtained by the following formula:

βi = Si / Smax

wherein Si is the area of the i-th boundary contour, Smax is the envelope area of the maximum contour, and βi ∈ (0, 1];

A suitable preset retention coefficient, denoted η, is set: if η ≤ βi, the contour is retained; otherwise the contour is eliminated. The preset retention coefficient may be 1% to 5%.
For determining whether the boundary contour has a parent contour and/or a child contour, specifically, for each boundary contour, determining whether the boundary contour has a parent contour, and if so, keeping the boundary contour as a target contour; otherwise, judging whether the boundary contour has a sub-contour; if the sub-contour exists, the boundary contour is reserved as a target contour, otherwise, the boundary contour is eliminated.
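The two-stage screening above (parent/child test, then area-coefficient test against the retention coefficient η) can be sketched as follows; the dictionary keys `area`, `parent`, and `children` are invented for the example, not the patent's data structures.

```python
def screen_contours(contours, s_max, eta=0.03):
    """Sketch of main-body-contour screening.  A contour survives
    stage 1 if it has a parent and/or child contour, and stage 2 if
    its area coefficient beta_i = S_i / S_max reaches the preset
    retention coefficient eta (e.g. 1%-5%)."""
    kept = []
    for c in contours:
        # Stage 1: target contours have a parent and/or a child.
        if c['parent'] is None and not c['children']:
            continue
        # Stage 2: area-coefficient screening against eta.
        if c['area'] / s_max >= eta:
            kept.append(c)
    return kept
```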
For segmenting the subject image from the preprocessed image according to the subject contour, specifically, an image region surrounded by the subject contour may be determined in the preprocessed image according to the subject contour, the subject image may be obtained by reserving the image region, and then the resolution of the segmented subject image may be enlarged to make the resolution of the segmented subject image consistent with the original optical tissue image.
Step S104: and carrying out self-adaptive clustering processing on the main image based on the color space to obtain a color quantization result of the main image.
Specifically, firstly, the pixels of the subject image may be converted into HSV nonlinear space representation, and then the pixel samples of the subject image represented in the HSV nonlinear space are input into the target clustering model for clustering to obtain a color quantization result of the subject image, wherein, in the process of clustering by the target clustering model, the similarity between the pixel samples is measured by using mahalanobis distance.
In a specific implementation, in RGB space the three color channels are too strongly correlated, so when clustering the subject image it is difficult to infer three relatively accurate component values, which makes RGB unsuitable as input to the target clustering model. The HSV color space expresses hue, saturation, and value (brightness) more intuitively than the RGB color space and makes color comparison convenient, so the pixels of the subject image can be converted to an HSV nonlinear space representation and then input, as HSV pixel samples, into the target clustering model for clustering.
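Python's standard library `colorsys` module illustrates the RGB-to-HSV conversion; the scaling of 8-bit channels to [0, 1] floats is a convention of `colorsys`, not of the patent.

```python
import colorsys

def to_hsv_samples(pixels):
    """Convert RGB pixels (8-bit ints) into HSV samples suitable as
    clustering input; colorsys operates on floats in [0, 1]."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in pixels]
```

For example, pure red maps to (H, S, V) = (0, 1, 1) and pure white to (0, 0, 1): the hue/saturation channels separate chromatic content from brightness, which is why HSV samples compare more intuitively than raw RGB triples.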
It should be noted that conventional clustering algorithms generally use the Euclidean distance as the distance measure between clustered pixels. Assume a data set X = {x_i}, i = 1, 2, 3, …, n, where each sample in X is described by e attributes A1, A2, …, Ae. For pixels in the HSV color space there are only three attributes A1, A2, A3, i.e., the data of the three channels H, S and V of each pixel. The similarity between a pixel sample x_i = (x_ih, x_is, x_iv) and a pixel sample x_j = (x_jh, x_js, x_jv) is generally expressed by the Euclidean distance between them:
$$d(x_i, x_j) = \sqrt{(x_{ih}-x_{jh})^2 + (x_{is}-x_{js})^2 + (x_{iv}-x_{jv})^2}$$
where d(x_i, x_j) is the Euclidean distance between pixel sample x_i and pixel sample x_j.

From the above formula it can be seen that the smaller the Euclidean distance, the greater the similarity between pixel sample x_i and pixel sample x_j and the smaller the difference between them; conversely, the larger the distance, the smaller the similarity and the greater the difference. However, the Euclidean distance can neither distinguish the differences between different attributes of a pixel sample nor reflect the influence of the overall variation and spread of the samples on the distance. Therefore, for the HSV color space, the Mahalanobis distance is usually used to measure the distance between clustered pixels, and the Mahalanobis distance between pixel sample x_i and pixel sample x_j is expressed by the following formula:
$$d^{*}(x_i, x_j) = \sqrt{(x_i - x_j)^{T} M^{-1} (x_i - x_j)}$$
where d*(x_i, x_j) is the Mahalanobis distance from pixel sample point i to pixel sample point j, x_i = (x_ih, x_is, x_iv) is the pixel value of sample point x_i in HSV space, x_j = (x_jh, x_js, x_jv) is the pixel value of sample point x_j in HSV space, and M is the covariance matrix of the samples to be measured. When i = j, the Mahalanobis distance d*(x_i, x_j) satisfies the following condition:
$$d^{*}(x_i, x_j) = 0$$
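The Mahalanobis measure above can be sketched in a few lines of NumPy; the toy HSV sample set is illustrative:

```python
import numpy as np

def mahalanobis(xi, xj, M_inv):
    """d*(xi, xj) = sqrt((xi - xj)^T M^-1 (xi - xj)), where M is the
    covariance matrix of the pixel samples being clustered."""
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.sqrt(diff @ M_inv @ diff))

# toy HSV pixel samples (rows are samples, columns are H, S, V)
X = np.array([[0.10, 0.50, 0.90],
              [0.20, 0.40, 0.80],
              [0.30, 0.60, 0.70],
              [0.15, 0.55, 0.95]])
M_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = mahalanobis(X[0], X[1], M_inv)
print(d > 0, mahalanobis(X[0], X[0], M_inv) == 0.0)  # True True
```

Unlike the Euclidean distance, the covariance term rescales each channel by its spread and discounts correlated channels, which is exactly the property motivating its use here.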
in the specific implementation process, inputting pixel samples of the subject image represented by HSV nonlinear space into a target clustering model for clustering, wherein the clustering process comprises the following steps: step A: determining a pixel point with the maximum density of main component pixels as an initial clustering center; and B: obtaining an effective value corresponding to an initial clustering center, and determining a pixel point farthest from the initial clustering center; and C: clustering the main body image by using the initial clustering center to obtain an effective value of a pixel point farthest from the initial clustering center, if the effective value of the pixel point is greater than the effective value corresponding to the initial clustering center, determining the pixel point as a next clustering center, and if not, finishing clustering; step D: and D, determining a pixel point farthest from the last clustering center, clustering the main body image by using the last clustering center, obtaining an effective value of the pixel point farthest from the last clustering center until the effective value of the pixel point is smaller than or equal to the effective value corresponding to the last clustering center, otherwise, determining the pixel point as the next clustering center, and repeatedly executing the step D.
For example, the target clustering model may be a K-Means clustering algorithm, or another clustering algorithm such as a K-Modes clustering algorithm. Since the number of main tissue components in each subject image is not large, a good clustering effect can be achieved while reducing the number of iterations, so the upper limit of the cluster number K may be set to 5 in the process of finding the best clustering K value based on color clustering. The best clustering K value can be determined, by evaluating the inter-class variance and the intra-class variance of the clustering, as the K value at which the clustering validity is maximal. Specifically, the clustering validity may be obtained using the following formula:
$$S_{CH}(k) = \frac{SS_E/(k-1)}{SS_M/(N-k)}$$
where k is the number of clusters, N is the number of samples of the training set, SS_E is the inter-class variance, and SS_M is the intra-class variance;
for the inter-class variance, it can be obtained using the following formula:
$$SS_E = tr(B_k) = \sum_{i=1}^{k} n_i \,\lVert c_i - c_F \rVert^2$$
where B_k is the inter-class covariance matrix and tr(B_k) is its trace, i.e., only the elements on the diagonal of the inter-class covariance matrix are considered; n_i is the number of data points in class i, c_i is the centroid of class i, and c_F is the centroid of all data points;
for the intra-class variance, it can be obtained using the following formula:
$$SS_M = tr(M_k) = \sum_{i=1}^{k} \sum_{x \in X_i} \lVert x - c_i \rVert^2$$
where M_k is the intra-class covariance matrix, tr(M_k) is the trace of the intra-class covariance matrix, X_i is the set of all sample points in class i, and c_i is the centroid of class i.
Thus, the S_CH score measures how far a clustering is from the ideal classification (largest inter-class variance, smallest intra-class variance); when S_CH reaches its maximum, the corresponding K value is the best clustering K value. In the clustering process, the initial clustering center may be the pixel point with the maximum density among the main component pixels. Then, among the main component pixels, the pixel point farthest from the initial clustering center is pre-selected as the candidate clustering center for the next round; the S_CH score is calculated each time a candidate clustering center is obtained, and this score is compared with the S_CH score of the previous clustering center. If the S_CH score of the candidate clustering center is higher than that of the previous clustering center, the candidate clustering center is determined as a new clustering center, and so on until the number of clusters reaches the optimum; otherwise, the candidate clustering center is eliminated and the clustering process ends. This scheme avoids randomness in selecting clustering centers, reduces the possibility of identical clustering centers, and further reduces the occurrence of empty clusters during clustering.
Finally, adaptive clustering processing is performed on the subject image based on the color space to obtain the color quantization result of the subject image, and the color quantization result comprises component images.
According to the above technical scheme, over-segmentation and under-segmentation are reduced in the process of obtaining the subject image, so the optical tissue image of the object to be analyzed can be segmented more accurately and the color quantization result of the subject image obtained. On this basis, further classification and identification of the color quantization result can yield a more accurate classification and identification result for the object to be analyzed, improving the accuracy of classification and identification of the optical tissue of objects with anisotropic structure.
In a second aspect, an embodiment of the present invention provides an image segmentation processing apparatus, including:
the image preprocessing unit 301 is configured to preprocess the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed;
an image binarization unit 302, configured to perform adaptive binarization processing on the preprocessed image based on a threshold value matched for each pixel to obtain a binarized image corresponding to the preprocessed image;
an image segmentation unit 303, configured to determine a boundary contour of the binarized image, determine a main body contour based on a relationship between contours of the boundary contour, and segment a main body image from the preprocessed image according to the main body contour;
and the image clustering unit 304 is configured to perform adaptive clustering processing on the subject image based on the color space to obtain a color quantization result of the subject image.
In an optional implementation manner, the image preprocessing unit 301 is specifically configured to:
and performing resolution reduction processing and graying processing on the optical tissue image, and performing filtering processing on the image subjected to the graying processing to obtain a preprocessed image of the object to be analyzed.
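A minimal sketch of this preprocessing chain (stride-based downscaling, luma-weighted graying, 3x3 median filtering); the concrete scale factor, weights and window size are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def preprocess(rgb, scale=2):
    """Reduce resolution by striding, convert to grey with the usual
    ITU-R BT.601 luma weights, then apply a 3x3 median filter."""
    small = rgb[::scale, ::scale]
    gray = small @ np.array([0.299, 0.587, 0.114])      # graying
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    return np.array([[np.median(pad[i:i + 3, j:j + 3])  # median filtering
                      for j in range(w)] for i in range(h)])

rgb = np.full((4, 4, 3), 100, dtype=np.uint8)
pre = preprocess(rgb)
print(pre.shape)  # (2, 2)
```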
In an optional implementation manner, the image binarization unit 302 includes:
the neighborhood mean value obtaining subunit is used for obtaining the pixel mean value of the neighborhood corresponding to each pixel based on the preset neighborhood of each pixel;
and the binarization processing subunit is used for performing adaptive binarization processing on the preprocessed image based on a preset threshold offset and a pixel mean value corresponding to each pixel.
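The two subunits above amount to mean-minus-offset thresholding, which can be sketched as follows (window size and offset are illustrative; `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C` performs the same computation in optimized form):

```python
import numpy as np

def adaptive_binarize(gray, win=3, offset=5):
    """Binarize with a per-pixel threshold equal to the mean of the
    win x win neighbourhood minus a preset threshold offset."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    means = np.array([[padded[i:i + win, j:j + win].mean()   # neighbourhood mean
                       for j in range(w)] for i in range(h)])
    return (gray > means - offset).astype(np.uint8) * 255

gray = np.array([[200, 200, 200],
                 [200,  40, 200],
                 [200, 200, 200]], dtype=np.uint8)
out = adaptive_binarize(gray)
print(out[1, 1], out[0, 0])  # 0 255
```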
In an alternative embodiment, the image segmentation unit 303 includes:
a boundary contour determining subunit, configured to perform topology analysis on the binarized image corresponding to the preprocessed image, and perform line scanning on a result of the topology analysis to obtain a boundary contour in the binarized image;
the target contour determining subunit is used for screening out the target contour by judging whether the boundary contour has a parent contour and/or a child contour;
and the main body contour determining subunit is used for screening the main body contour in the binary image from the target contour based on the area coefficient corresponding to the boundary contour and the preset retention coefficient.
In an alternative embodiment, the target profile determination subunit is specifically configured to:
judging whether the boundary contour has a father contour or not for each boundary contour, and if so, reserving the boundary contour as a target contour; otherwise, judging whether the boundary contour has a sub-contour; if the sub-contour exists, the boundary contour is reserved as a target contour, otherwise, the boundary contour is eliminated.
In an optional implementation manner, the image segmentation unit 303 further includes:
the main image segmentation subunit is used for determining an image area surrounded by the main body outline in the preprocessed image according to the main body outline and obtaining a main image by reserving the image area;
and the main body image amplification processing subunit is used for carrying out resolution amplification processing on the main body image so as to enable the resolution of the main body image to be consistent with the optical tissue image.
In an optional implementation manner, the image clustering unit 304 includes:
a color space conversion subunit for converting pixels of the subject image into HSV non-linear spatial representation;
and the clustering processing subunit is used for inputting pixel samples of the main image represented in the HSV nonlinear space into the target clustering model for clustering to obtain a color quantization result of the main image, wherein the similarity between the pixel samples is measured by using the Mahalanobis distance in the clustering process through the target clustering model.
In an optional implementation manner, the clustering processing subunit is specifically configured to:
inputting the pixel samples of the subject image represented in the HSV nonlinear space into the target clustering model for clustering, wherein the clustering process comprises the following steps. Step A: determining the pixel point with the maximum density among the main component pixels as the initial clustering center. Step B: obtaining the effective value corresponding to the initial clustering center, and determining the pixel point farthest from the initial clustering center. Step C: clustering the subject image using the initial clustering center, and obtaining the effective value of the pixel point farthest from the initial clustering center; if the effective value of that pixel point is greater than the effective value corresponding to the initial clustering center, determining that pixel point as the next clustering center, otherwise ending the clustering. Step D: determining the pixel point farthest from the latest clustering center, clustering the subject image using the latest clustering center, and obtaining the effective value of that farthest pixel point; if the effective value is less than or equal to the effective value corresponding to the latest clustering center, ending the clustering, otherwise determining that pixel point as the next clustering center and repeating step D.
In a third aspect, based on the same inventive concept, an embodiment of the present invention provides an image segmentation processing apparatus.
Referring to fig. 4, an image segmentation processing apparatus according to an embodiment of the present invention includes: a memory 401, a processor 402, and code stored in the memory 401 and executable on the processor 402, wherein the processor 402, when executing the code, implements any embodiment of the image segmentation processing method of the foregoing first aspect.
In fig. 4, a bus architecture is represented by bus 400. Bus 400 may include any number of interconnected buses and bridges, and links together various circuits, including one or more processors represented by processor 402 and memory represented by memory 401. The bus 400 may also link together various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 406 provides an interface between the bus 400 and the receiver 403 and transmitter 404. The receiver 403 and the transmitter 404 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 402 is responsible for managing the bus 400 and general processing, and the memory 401 may be used for storing data used by the processor 402 in performing operations.
The technical scheme in the embodiment of the invention at least has the following technical effects or advantages:
1. The image segmentation processing method and apparatus of the present invention can reduce over-segmentation and under-segmentation of the image, more accurately separate the optical tissue image of the object to be analyzed from the non-optical tissue image, and lay a foundation for subsequent tissue identification.
2. The image segmentation processing method of the present invention screens out target contours by judging whether each boundary contour has a parent contour and/or a child contour, and screens the main body contour in the binarized image from the target contours based on the area coefficient corresponding to the boundary contour and a preset retention coefficient. The air hole contours can thus be distinguished from the fine contours between pixel gaps of the main component: all pixels in the main component are retained as far as possible, the fine contours between pixel gaps are removed, and the visually obvious, large air hole contours are retained.
3. In the image segmentation processing method of the present invention, the pixels of the subject image are converted into the HSV nonlinear space representation, and the pixel samples of the subject image represented in the HSV nonlinear space are then input into the target clustering model for clustering. Compared with the RGB color space, the HSV color space expresses the hue, saturation and brightness of a color more intuitively and makes color comparison convenient, avoiding the situation in the RGB color space where, because of the strong correlation among the three color channels, it is difficult to infer accurate component values when clustering the subject image.
4. The smaller the Euclidean distance, the greater the similarity between pixel samples and the smaller the difference between them; conversely, the larger the distance, the smaller the similarity and the greater the difference. However, the Euclidean distance can neither distinguish the differences between different attributes of pixel samples nor reflect the influence of the overall variation and spread of the samples on the distance, which is why the Mahalanobis distance is used instead.
5. Conventional clustering centers are often selected randomly, and if identical clustering centers appear, empty clusters can appear in the result. In the image segmentation processing method of the present invention, the initial clustering center may be the pixel point with the maximum density among the main component pixels. Then, among the main component pixels, the pixel point farthest from the initial clustering center is pre-selected as the candidate clustering center for the next round; the S_CH score is calculated each time a candidate clustering center is obtained, and this score is compared with the S_CH score of the previous clustering center. If the S_CH score of the candidate clustering center is higher than that of the previous clustering center, the candidate clustering center is determined as a new clustering center, and so on until the number of clusters reaches the optimum; otherwise, the candidate clustering center is eliminated and the clustering process ends.
as will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer instructions. These computer instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. An image segmentation processing method, comprising:
preprocessing the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed;
performing self-adaptive binarization processing on the preprocessed image based on the threshold value matched with each pixel to obtain a binarization image corresponding to the preprocessed image;
determining the boundary contour of the binarized image, including: performing topology analysis on the binarized image corresponding to the preprocessed image, and performing line scanning on the result of the topology analysis to obtain a boundary contour in the binarized image; determining a main body contour based on the inter-contour involvement relation of the boundary contour, and segmenting a main body component image from the preprocessed image according to the main body contour; the determining the main body contour based on the inter-contour involvement relation of the boundary contour comprises the following steps: screening out a target contour by judging whether the boundary contour has a parent contour and/or a child contour; screening a main body contour in the binarized image from the target contour based on an area coefficient corresponding to the boundary contour and a preset retention coefficient;
the judging whether the boundary contour has a parent contour and/or a child contour or not and screening out a target contour comprises the following steps: judging whether the boundary contour has a father contour or not aiming at each boundary contour, and if so, keeping the boundary contour as a target contour; otherwise, judging whether the boundary contour has a sub-contour; if the sub-contour exists, the boundary contour is reserved as a target contour, otherwise, the boundary contour is eliminated; wherein a sub-contour of the boundary contour is inside the boundary contour and a parent contour of the boundary contour is outside the boundary contour;
and carrying out self-adaptive clustering processing on the main component image based on the color space to obtain a color quantization result of the main component image.
2. The method of claim 1, wherein the pre-processing of the acquired optical tissue image of the object to be analyzed comprises:
and performing resolution reduction processing and graying processing on the optical tissue image, and performing filtering processing on the image subjected to the graying processing to obtain a preprocessed image of the object to be analyzed.
3. The method of claim 1, wherein said adaptively binarizing the pre-processed image based on the threshold value for each pixel match comprises:
acquiring a pixel mean value of a neighborhood corresponding to each pixel based on a preset neighborhood of each pixel;
and performing self-adaptive binarization processing on the preprocessed image based on a preset threshold offset and a pixel mean value corresponding to each pixel.
4. The method of claim 1, wherein said segmenting a subject composition image from said pre-processed image according to said subject contour comprises:
according to the main body outline, determining an image area surrounded by the main body outline in the preprocessed image, and obtaining the main body component image by reserving the image area;
and carrying out resolution magnification processing on the main component image to enable the resolution of the main component image to be consistent with that of the optical tissue image.
5. The method of claim 1, wherein the adaptively clustering the subject component images based on color space comprises:
converting pixels of the subject component image to an HSV nonlinear spatial representation;
inputting the pixel samples of the main component image represented in the HSV nonlinear space into a target clustering model for clustering to obtain a color quantization result of the main component image, wherein the similarity between the pixel samples is measured by using the Mahalanobis distance in the clustering process through the target clustering model.
6. The method of claim 5, wherein said clustering said body component images in a pixel sample input target clustering model of an HSV non-linear spatial representation comprises:
step A: determining a pixel point with the maximum density of the main component pixel as an initial clustering center;
and B: obtaining an effective value corresponding to the initial clustering center, and determining a pixel point farthest from the initial clustering center;
and C: clustering the main component images by using the initial clustering centers to obtain effective values of pixel points which are farthest away from the initial clustering centers, if the effective values of the pixel points are larger than the effective values corresponding to the initial clustering centers, determining the pixel point as a next clustering center, and if not, finishing the clustering;
step D: determining a pixel point farthest from the last clustering center, clustering the main component image by using the last clustering center, and obtaining an effective value of the pixel point farthest from the last clustering center; if the effective value of the pixel point is smaller than or equal to the effective value corresponding to the last clustering center, ending the clustering, otherwise, determining the pixel point as the next clustering center and repeatedly executing the step D.
7. An image segmentation processing apparatus, comprising:
the image preprocessing unit is used for preprocessing the acquired optical tissue image of the object to be analyzed to obtain a preprocessed image of the object to be analyzed;
the image binarization unit is used for carrying out self-adaptive binarization processing on the preprocessed image based on the threshold value matched with each pixel to obtain a binarization image corresponding to the preprocessed image;
the image segmentation unit is used for determining the boundary contour of the binarized image, and comprises the following steps: performing topology analysis on the binarized image corresponding to the preprocessed image, and performing line scanning on the result of the topology analysis to obtain a boundary contour in the binarized image; determining a main body contour based on the inter-contour involvement relation of the boundary contour, and segmenting a main body component image from the preprocessed image according to the main body contour; the determining the main body contour based on the inter-contour involvement relation of the boundary contour comprises the following steps: screening out a target contour by judging whether the boundary contour has a parent contour and/or a child contour; screening out a main body contour in the binarized image from the target contour based on an area coefficient corresponding to the boundary contour and a preset retention coefficient;
wherein, the judging whether the boundary contour has a parent contour and/or a child contour, and screening out the target contour comprises: judging whether the boundary contour has a father contour or not for each boundary contour, and if so, reserving the boundary contour as a target contour; otherwise, judging whether the boundary contour has a sub-contour; if the sub-contour exists, the boundary contour is reserved as a target contour, otherwise, the boundary contour is eliminated;
and the image clustering unit is used for carrying out self-adaptive clustering processing on the main component image based on the color space to obtain a color quantization result of the main component image.
8. An electronic device, comprising: memory, processor and code stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-6 when executing the code.
CN202110349479.XA 2021-03-31 2021-03-31 Image segmentation processing method and device Active CN113139936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349479.XA CN113139936B (en) 2021-03-31 2021-03-31 Image segmentation processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110349479.XA CN113139936B (en) 2021-03-31 2021-03-31 Image segmentation processing method and device

Publications (2)

Publication Number Publication Date
CN113139936A CN113139936A (en) 2021-07-20
CN113139936B true CN113139936B (en) 2022-07-08

Family

ID=76810233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110349479.XA Active CN113139936B (en) 2021-03-31 2021-03-31 Image segmentation processing method and device

Country Status (1)

Country Link
CN (1) CN113139936B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499136A (en) * 2009-03-05 2009-08-05 西安电子科技大学 Image over-segmenting optimization method based on multi-target evolution clustering and spatial information
CN101567084A (en) * 2009-06-05 2009-10-28 西安电子科技大学 Method for detecting picture contour based on combination of level set and watershed
CN104851085A (en) * 2014-02-17 2015-08-19 征图新视(江苏)科技有限公司 Method and system automatically obtaining detection zone in image
CN105335685A (en) * 2014-07-22 2016-02-17 北大方正集团有限公司 Image identification method and apparatus
US9530199B1 (en) * 2015-07-13 2016-12-27 Applied Materials Israel Ltd Technique for measuring overlay between layers of a multilayer structure
CN108305268A (en) * 2018-01-03 2018-07-20 沈阳东软医疗系统有限公司 A kind of image partition method and device
CN109523566A (en) * 2018-09-18 2019-03-26 姜枫 A kind of automatic division method of Sandstone Slice micro-image
CN109993758A (en) * 2019-04-23 2019-07-09 北京华力兴科技发展有限责任公司 Dividing method, segmenting device, computer equipment and storage medium
CN110020657A (en) * 2019-01-15 2019-07-16 浙江工业大学 A kind of bitmap silhouettes coordinate extraction method of cutting
CN111798472A (en) * 2020-07-13 2020-10-20 中国计量大学 End cocoon segmentation and identification method based on HSI space

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499136A (en) * 2009-03-05 2009-08-05 Xidian University Image over-segmentation optimization method based on multi-objective evolutionary clustering and spatial information
CN101567084A (en) * 2009-06-05 2009-10-28 Xidian University Method for detecting image contours based on a combination of level sets and watershed
CN104851085A (en) * 2014-02-17 2015-08-19 Focusight Technology Co., Ltd. (Jiangsu) Method and system for automatically obtaining a detection zone in an image
CN105335685A (en) * 2014-07-22 2016-02-17 Peking University Founder Group Co., Ltd. Image recognition method and apparatus
US9530199B1 (en) * 2015-07-13 2016-12-27 Applied Materials Israel Ltd Technique for measuring overlay between layers of a multilayer structure
CN108305268A (en) * 2018-01-03 2018-07-20 Shenyang Neusoft Medical Systems Co., Ltd. Image segmentation method and device
CN109523566A (en) * 2018-09-18 2019-03-26 Jiang Feng Automatic segmentation method for sandstone thin-section microscopic images
CN110020657A (en) * 2019-01-15 2019-07-16 Zhejiang University of Technology Contour coordinate extraction method for bitmap cutting
CN109993758A (en) * 2019-04-23 2019-07-09 Beijing Hualixing Sci-Tech Development Co., Ltd. Segmentation method, segmentation device, computer equipment and storage medium
CN111798472A (en) * 2020-07-13 2020-10-20 China Jiliang University Cocoon end segmentation and identification method based on HSI space

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Active contours with selective local or global segmentation property for multiobject image; Weibin Li et al.; Optical Engineering; 2011-06-03; Vol. 50, No. 6; pp. 1-6 *
Application of image analysis to the determination of coke pore structure parameters; Ren Shibiao et al.; Journal of Anhui University of Technology; 2003-01-31; Vol. 20, No. 1; pp. 66-81 *
Research on key technologies for automatic recognition of coke optical microstructure; Zhou Fang; China Master's Theses Full-text Database, Information Science and Technology; 2012-10-15 (No. 10); I138-41 *
Research on segmentation of coke microscopic images; Mao Xueqin; China Master's Theses Full-text Database, Information Science and Technology; 2012-03-15 (No. 03); I138-2316 *

Also Published As

Publication number Publication date
CN113139936A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN110334706B (en) Image target identification method and device
CN115082683B (en) Injection molding defect detection method based on image processing
CN109154978B (en) System and method for detecting plant diseases
Poletti et al. A review of thresholding strategies applied to human chromosome segmentation
CN111738064B (en) Haze concentration identification method for haze image
Mitianoudis et al. Document image binarization using local features and Gaussian mixture modeling
CN110717896B (en) Plate strip steel surface defect detection method based on significance tag information propagation model
CN105844278B (en) Fabric scan pattern recognition method based on multi-feature fusion
Kumar et al. Review on image segmentation techniques
CN108510499B (en) Image threshold segmentation method and device based on fuzzy set and Otsu
US20080285856A1 (en) Method for Automatic Detection and Classification of Objects and Patterns in Low Resolution Environments
CN111507426B (en) Non-reference image quality grading evaluation method and device based on visual fusion characteristics
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN109598681B (en) No-reference quality evaluation method for image after repairing of symmetrical Thangka
Nie et al. Two-dimensional extension of variance-based thresholding for image segmentation
De Automatic data extraction from 2D and 3D pie chart images
CN115620075B (en) Method, system and equipment for generating data set for leukocyte classification model
CN114170418A (en) Automobile wire harness connector multi-feature fusion image retrieval method by searching images through images
Kartika et al. Butterfly image classification using color quantization method on hsv color space and local binary pattern
CN110188693B (en) Improved complex environment vehicle feature extraction and parking discrimination method
TWI498830B (en) A method and system for license plate recognition under non-uniform illumination
Rotem et al. Combining region and edge cues for image segmentation in a probabilistic gaussian mixture framework
Gunawan et al. Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
CN113139936B (en) Image segmentation processing method and device
Yahya et al. Image enhancement background for high damage Malay manuscripts using adaptive threshold binarization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant