CN115311276B - Intelligent segmentation method for ferrographic image based on machine vision - Google Patents
Intelligent segmentation method for ferrographic image based on machine vision
- Publication number
- CN115311276B (application CN202211241929.4A)
- Authority
- CN
- China
- Prior art keywords
- color
- points
- initial
- category
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0004: Image analysis; Inspection of images, e.g. flaw detection; Industrial image inspection
- G06T7/10: Image analysis; Segmentation; Edge detection
- G06T7/90: Image analysis; Determination of colour characteristics
- G06V10/762: Image or video recognition or understanding using pattern recognition or machine learning; using clustering, e.g. of similar faces in social networks
- G06T2207/10056: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Microscopic image
- G06T2207/30108: Indexing scheme for image analysis or image enhancement; Subject of image; Industrial image inspection
- G06T2207/30136: Indexing scheme for image analysis or image enhancement; Subject of image; Industrial image inspection; Metal
Abstract
The invention relates to the field of machine vision, and in particular to a machine-vision-based intelligent segmentation method for ferrographic images, comprising the following steps: converting all pixel points in the ferrographic image into color points in a Lab color space; obtaining all initial categories of all color points according to the initial cluster centers and the first color difference values; obtaining the suitability of each target color point in each initial category, obtaining the suitability of all color points in each initial category according to a suitability prediction formula, and thereby obtaining new cluster centers; obtaining the final categories of all color points through multiple iterations; and obtaining the segmentation result of the ferrographic image according to the final categories of all color points. The method makes the segmentation result of the ferrographic image agree better with the segmentation expected by human vision and provides a better basis for wear condition monitoring and fault diagnosis of equipment.
Description
Technical Field
The invention relates to the field of machine vision, in particular to a ferrographic image intelligent segmentation method based on machine vision.
Background
Ferrography is a technique for analyzing characteristics of abrasive dust (wear debris) such as particle size, shape, color and texture, and can be used for wear condition monitoring and fault diagnosis. When equipment wears, its dominant wear state can be inferred by analyzing the common characteristics of the abrasive dust contained in the equipment's oil.
Extracting the characteristics of the abrasive dust first requires segmenting the abrasive dust. Existing segmentation methods cluster the ferrographic image of the abrasive dust based on color difference alone; regions that the human eye perceives as a single connected domain can therefore be split into different connected domains, so the segmentation of the ferrographic image of the abrasive dust is poor, which in turn degrades wear condition monitoring and fault diagnosis.
Therefore, this scheme provides a machine-vision-based intelligent ferrographic image segmentation method that improves the color-based segmentation of the ferrographic image of the abrasive dust and provides a better basis for wear condition monitoring and fault diagnosis of equipment.
Disclosure of Invention
In order to solve the above problems, the present invention provides a ferrographic image intelligent segmentation method based on machine vision, the method comprising:
obtaining an iron spectrum image of abrasive dust in the oil sample through a microscopic module; converting all pixel points in the ferrographic image into color points in a Lab color space;
randomly selecting a first number of color points from all color points in the Lab color space, and recording the color points as initial clustering centers;
S1: obtaining all initial categories of all color points according to all initial cluster centers, including:
calculating all first color difference values of all initial clustering centers and all color points in the Lab color space; clustering all the color points by using a K-means clustering algorithm according to all the first color difference values and the initial clustering centers to obtain all initial categories of all the color points;
S2: obtaining all new clustering centers according to all the initial categories, including:
for any one initial class, randomly sampling a plurality of color points from all the color points of the initial class, and marking as target color points; calculating all second color difference values of all target color points and all other color points in the initial category, and obtaining the appropriate degree of the target color points according to all the second color difference values; calculating the appropriateness of each color point of the initial category according to the appropriateness of all target color points; marking the color point with the maximum suitable degree in all the color points of the initial category as a new clustering center; for all initial categories, obtaining all new clustering centers;
repeatedly executing S1 and S2 by taking the new clustering center as an initial clustering center until the initial clustering center is not changed any more, and taking the obtained initial category as a final category;
and clustering all pixel points in the ferrograph image according to all color points corresponding to the final category to obtain the segmentation result of all pixel points in the ferrograph image.
Further, the step of calculating the first color difference value and the second color difference value comprises:
obtaining the tolerance of the initial cluster center and judging whether a color point in the Lab color space lies within the tolerance of the initial cluster center; if it does, the first color difference value is 0; otherwise the first color difference value is the Euclidean color difference in the Lab color space:
$$\Delta E_1=\sqrt{(L_c-L_p)^2+(a_c-a_p)^2+(b_c-b_p)^2}$$
wherein $L_c$ is the luminance value of the initial cluster center, $L_p$ is the luminance value of the color point in the Lab color space, $(a_c,b_c)$ are the color values of the initial cluster center, $(a_p,b_p)$ are the color values of the color point in the Lab color space, and $\Delta E_1$ is the first color difference value;
obtaining the tolerance of the target color point and judging whether another color point in the initial category lies within the tolerance of the target color point; if it does, the second color difference value is 0; otherwise the second color difference value is:
$$\Delta E_2=\sqrt{(L_t-L_o)^2+(a_t-a_o)^2+(b_t-b_o)^2}$$
wherein $L_t$ is the luminance value of the target color point, $L_o$ is the luminance value of the other color point in the initial category, $(a_t,b_t)$ are the color values of the target color point, $(a_o,b_o)$ are the color values of the other color point in the initial category, and $\Delta E_2$ is the second color difference value.
Further, the step of clustering all the color points by using a K-means clustering algorithm according to all the first color difference values and the initial clustering centers to obtain all the initial categories of all the color points includes:
according to the first color difference value between the color point and each initial cluster center, and based on the minimum color-difference principle of the K-means clustering algorithm, the color point is assigned to the initial cluster center with which it has the minimum first color difference value; the set of all color points assigned to each initial cluster center is recorded as an initial category, giving all initial categories of all color points.
Further, the step of obtaining the appropriateness of the target color point according to all the second color difference values includes:
recording the sequence formed by all second color difference values between the target color point and all other color points in the initial category as the color difference value sequence of the target color point, and obtaining the suitability of the target color point from this sequence; the suitability of the target color point is an exponential function with the natural constant e as its base that decreases with the variance of the color difference values in the sequence, with the difference between the maximum and minimum color difference values in the sequence, and with the mean of the color difference values in the sequence.
Further, the step of calculating the suitability degree of each color point of the initial class according to the suitability degrees of all the target color points comprises:
obtaining the suitabilities of all target color points from the color difference value sequences of all target color points in each initial category; fitting all target color points and their suitabilities by the least squares method to obtain a suitability prediction formula, the suitability prediction formula being a cubic polynomial in the three variables L, a and b; and obtaining the suitability of every color point of each initial category from its luminance and color values via the suitability prediction formula.
Further, the step of clustering all pixel points in the ferrograph image according to all color points corresponding to the final category to obtain segmentation results of all pixel points in the ferrograph image includes:
obtaining each color point contained in each final category, and obtaining all pixel points corresponding to each color point in the ferrographic image, namely obtaining all pixel points corresponding to all color points contained in each final category; and recording a set formed by all pixel points corresponding to a final category as a category of the ferrographic image to obtain all categories of the ferrographic image, wherein the categories are used as segmentation results of all pixel points in the ferrographic image.
The embodiment of the invention at least has the following beneficial effects:
by changing the category-division step of the k-means algorithm, i.e. dividing categories based on the minimum color difference value, the clustering result of each iteration agrees better with human vision; and before each iteration the new cluster centers are obtained by sampling, scoring and fitting based on the suitability of the color points, so that after iteration the final segmentation better matches the segmentation expected by human vision. This improves the color-based segmentation of the ferrographic image of the abrasive dust and provides a better basis for wear condition monitoring and fault diagnosis of equipment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating steps of a method for intelligently segmenting a ferrographic image based on machine vision according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the intended objects and their effects, the embodiments, structures, features and effects of the machine-vision-based intelligent ferrographic image segmentation method according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The specific scheme of the intelligent segmentation method of the ferrographic image based on machine vision provided by the invention is specifically described below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a method for intelligent segmentation of ferrographic images based on machine vision according to an embodiment of the present invention is shown, where the method includes the following steps:
s001, obtaining a ferrogram image through a microscopic module, and converting all pixel points in the ferrogram image into color points in a Lab color space.
It should be noted that the obtained ferrographic image is used for subsequent analysis, and since the color distribution in the Lab color space better conforms to the visual distribution of human eyes, the ferrographic image is converted from the RGB color space to the Lab color space, so that a better color clustering result is obtained based on the converted ferrographic image in the subsequent process.
(1) Obtain a ferrographic image through the microscopic module.
The specific method is as follows: an oil sample is collected from the equipment and placed on a microscopic observation platform; a magnification of 100 times is used in this scheme, and a ferrographic image of the abrasive dust in the oil sample is obtained under the microscope. An implementer may adjust the magnification of the microscopic observation platform according to the specific implementation scenario.
(2) Convert all pixel points in the ferrographic image into color points in the Lab color space.
The specific method is as follows: each pixel point in the ferrographic image is converted from the RGB color space to the XYZ color space and then from the XYZ color space to the Lab color space, giving the luminance value L and the color values a and b of each pixel point; all pixel points in the ferrographic image are thereby converted into color points in the Lab color space. Several pixel points in the ferrographic image may correspond to one color point in the Lab color space; conversely, one color point in the Lab color space may correspond to several pixel points in the ferrographic image.
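As an illustration of this conversion step (not part of the original disclosure), the following Python sketch converts a ferrographic image to the Lab color space and groups identical Lab values into color points, keeping track of which pixels map to each color point; the use of scikit-image's rgb2lab and the rounding to two decimals are assumptions of this sketch.

```python
import numpy as np
from skimage import io, color  # rgb2lab converts RGB -> XYZ -> Lab internally

def image_to_color_points(path):
    """Convert a ferrographic image into Lab color points.

    Returns an N x 3 array of distinct (L, a, b) color points and, for each
    color point, the list of pixel coordinates that map to it (several pixels
    may share one color point).
    """
    rgb = io.imread(path)            # H x W x 3 image, assumed plain RGB
    lab = color.rgb2lab(rgb)         # luminance L and color values a, b per pixel

    points = {}                      # (L, a, b) -> list of (row, col)
    height, width = lab.shape[:2]
    for r in range(height):
        for c in range(width):
            key = tuple(np.round(lab[r, c], 2))   # quantize so equal colors merge
            points.setdefault(key, []).append((r, c))

    color_points = np.array(list(points.keys()))   # distinct color points
    pixel_map = list(points.values())               # pixels behind each color point
    return color_points, pixel_map
```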
S002, obtaining all initial cluster centers, calculating the first color difference values according to the tolerance of the initial cluster centers, and obtaining all initial categories of all color points according to all initial cluster centers and the first color difference values.
It should be noted that the existing method of clustering ferrographic images by color to realize color segmentation can assign pixel points that should belong to the same category to different categories, giving a poor color segmentation result. The existing clustering method clusters only on numerical color difference, without considering that a numerical color difference does not necessarily correspond to a difference perceptible to human vision: although the pixel points of the ferrographic image of a single piece of abrasive dust have different color values, their colors appear similar to the human eye. Therefore, to obtain a color segmentation result that better matches human visual perception, clustering must take the tolerance of the color points into account: the color difference value is calculated according to the tolerance of the color points, and clustering is then performed based on these color difference values.
(1) All initial cluster centers are obtained.
From all color points in the Lab color space, a first number of color points, denoted as initial cluster centers, is randomly selected, which in this embodiment is 3.
(2) Acquire the tolerance of the color points.
It should be noted that, in the Lab color space, the range of color variation that the human eye cannot perceive is called the color tolerance. When the difference between the colors of two color points is smaller than this tolerance, the human eye cannot distinguish the two colors, so when clustering the ferrographic image by color, color points whose color difference lies within the tolerance can be clustered together.
In this embodiment, the tolerance of a color point is obtained from its MacAdam circle, specifically: the coordinates of the color point on the chromaticity diagram are acquired, the region corresponding to the color point on the chromaticity diagram is obtained from these coordinates and the MacAdam circle of the color point, and this region is recorded as the tolerance of the color point.
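The MacAdam-circle tolerance is read from the chromaticity diagram in the scheme above; the sketch below is a simplified stand-in that treats the tolerance as a fixed just-noticeable-difference radius around a color point in Lab space. The helper name and the threshold value 2.3 (a commonly quoted just-noticeable difference for CIE76 color differences) are illustrative assumptions, not the patented procedure.

```python
import numpy as np

# Illustrative stand-in for the MacAdam-circle tolerance: a fixed
# just-noticeable-difference radius in Lab space (assumed, not from the patent).
JND_RADIUS = 2.3

def within_tolerance(center_lab, point_lab, radius=JND_RADIUS):
    """Return True if point_lab lies inside the tolerance region of center_lab."""
    diff = np.asarray(center_lab, dtype=float) - np.asarray(point_lab, dtype=float)
    return float(np.sqrt(np.sum(diff ** 2))) <= radius
```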
(3) Calculate the first color difference value according to the tolerance of the initial cluster center.
It should be noted that human vision tolerates color differences within a certain range, so in order to obtain a color segmentation of the ferrographic image that matches what human vision expects, this scheme derives the color-difference calculation of a color point from the tolerance of the color point.
The color difference value between an initial cluster center and a color point in the Lab color space is recorded as the first color difference value, calculated as follows: the tolerance of the initial cluster center is obtained and it is judged whether the color point in the Lab color space lies within this tolerance; if it does, the first color difference value is 0; otherwise the first color difference value is the Euclidean color difference in the Lab color space:
$$\Delta E_1=\sqrt{(L_c-L_p)^2+(a_c-a_p)^2+(b_c-b_p)^2}$$
wherein $L_c$ is the luminance value of the initial cluster center, $L_p$ is the luminance value of the color point in the Lab color space, $(a_c,b_c)$ are the color values of the initial cluster center, $(a_p,b_p)$ are the color values of the color point in the Lab color space, and $\Delta E_1$ is the first color difference value.
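A minimal sketch of this piecewise color-difference computation, used for both the first and the second color difference values, assuming the out-of-tolerance branch is the CIE76-style Euclidean distance and reusing the fixed-radius tolerance stand-in from the previous sketch:

```python
import numpy as np

def color_difference(center_lab, point_lab, radius=2.3):
    """First/second color difference value: 0 inside the tolerance region,
    otherwise the Euclidean (CIE76-style) distance between the two Lab points."""
    c = np.asarray(center_lab, dtype=float)
    p = np.asarray(point_lab, dtype=float)
    delta = float(np.sqrt(np.sum((c - p) ** 2)))
    return 0.0 if delta <= radius else delta
```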
(4) Obtain all initial categories of all color points according to all initial cluster centers and the first color difference values.
The first color difference value between each initial cluster center and each color point in the Lab color space is calculated; according to its first color difference value to each initial cluster center, and following the minimum color-difference principle of the K-means clustering algorithm, each color point is assigned to the initial cluster center with which it has the smallest first color difference value. The set of all color points assigned to each initial cluster center is recorded as an initial category, giving all initial categories of all color points.
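The assignment step can then be sketched as follows; this is a hedged illustration that reuses the color_difference helper assumed above and represents cluster centers as Lab triples:

```python
import numpy as np

def assign_initial_categories(color_points, centers):
    """Assign every color point to the center with the minimum first color
    difference value; returns one list of color-point indices per center."""
    categories = [[] for _ in centers]
    for idx, point in enumerate(color_points):
        diffs = [color_difference(center, point) for center in centers]
        categories[int(np.argmin(diffs))].append(idx)
    return categories
```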
S003, all the target color points of each initial class are obtained, the color difference value sequence of each target color point is calculated, the appropriateness degree of each target color point is obtained according to the color difference value sequence of the target color points, the appropriateness degree of each color point of each initial class is obtained according to the appropriateness degree of each target color point of each initial class, and then a new clustering center of each initial class is obtained.
(1) All target color points of each initial class are obtained.
Randomly sample a second number of color points from all color points of each initial category and take the sampled color points as the target color points of that category, where the second number is the product of the number of color points in the category and a preset sampling percentage.
(2) A sequence of color difference values for each target color point is calculated.
In this embodiment, the color difference value between a target color point and another color point in the same initial category is recorded as the second color difference value, calculated as follows: the tolerance of the target color point is obtained and it is judged whether the other color point in the initial category lies within this tolerance; if it does, the second color difference value is 0; otherwise the second color difference value is:
$$\Delta E_2=\sqrt{(L_t-L_o)^2+(a_t-a_o)^2+(b_t-b_o)^2}$$
wherein $L_t$ is the luminance value of the target color point, $L_o$ is the luminance value of the other color point in the initial category, $(a_t,b_t)$ are the color values of the target color point, $(a_o,b_o)$ are the color values of the other color point, and $\Delta E_2$ is the second color difference value.
All second color difference values between the target color point and all other color points in the initial category are arranged from small to large, and the resulting sequence is recorded as the color difference value sequence of the target color point.
(3) Obtain the suitability of each target color point from its color difference value sequence.
It should be noted that the more stable the color difference value sequence of a target color point, i.e. the more consistent its color differences to the other color points in the initial category, the more uniformly the color points of that category differ under human vision, and the more suitable the target color point is as a new cluster center. The suitability of the target color point is therefore obtained from its color difference value sequence and is used to select the new cluster center.
In this embodiment, the suitability of the j-th target color point in the i-th initial category is computed from its color difference value sequence as an exponential function with the natural constant e as its base, which decreases with the variance of the color difference values in the sequence, with the difference between the maximum and minimum color difference values in the sequence, and with the mean of the color difference values in the sequence.
The smaller the variance, the more stable the sequence and the larger the suitability; the smaller the difference between the maximum and minimum, the more stable the sequence and the larger the suitability; and the smaller the mean, the more consistent the sequence and the larger the suitability, i.e. the more suitable the target color point is as a new cluster center.
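As a hedged illustration of the suitability calculation, the sketch below uses one plausible form consistent with the description above: a base-e exponential that decreases with the variance, the range and the mean of the color difference value sequence. The combination exp(-(variance + range + mean)), the sample_fraction default and the helper names are assumptions of this sketch; the second function also illustrates the random sampling of target color points from step (1).

```python
import numpy as np

def suitability_from_sequence(diff_sequence):
    """Suitability of a target color point from its color difference value
    sequence: larger for a more stable, more consistent sequence (assumed form)."""
    seq = np.asarray(diff_sequence, dtype=float)
    variance = float(np.var(seq))
    value_range = float(seq.max() - seq.min())
    mean = float(seq.mean())
    return float(np.exp(-(variance + value_range + mean)))   # assumed combination

def target_point_suitabilities(color_points, category, sample_fraction=0.2, seed=0):
    """Randomly sample target color points from a category and score each one
    against all other color points of the category."""
    rng = np.random.default_rng(seed)
    n_targets = max(1, int(len(category) * sample_fraction))
    targets = rng.choice(category, size=n_targets, replace=False)
    scores = {}
    for t in targets:
        seq = sorted(color_difference(color_points[t], color_points[o])
                     for o in category if o != t)
        # A single-point category has no neighbours; treat it as trivially stable.
        scores[int(t)] = suitability_from_sequence(seq) if seq else 1.0
    return scores
```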
(4) Obtain the suitability of all color points of each initial category from the suitabilities of its target color points, and then obtain the new cluster center of each initial category.
In this embodiment, the suitabilities of all target color points are obtained from their color difference value sequences; all target color points and their suitabilities are then fitted by the least squares method to obtain a suitability prediction formula, which is a cubic polynomial in the three variables L, a and b; the suitability of every color point of each initial category is obtained from its luminance and color values via this prediction formula, and the color point with the largest suitability in each initial category is recorded as the new cluster center.
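A hedged sketch of the fitting step: a cubic polynomial in the three variables (L, a, b) is fitted by ordinary least squares to the sampled target color points and their suitabilities, and the fitted formula is evaluated for every color point of the category to pick the new cluster center. The exact monomial basis used here for the "ternary cubic polynomial" is an assumption of this sketch.

```python
import itertools
import numpy as np

def cubic_features(lab):
    """All monomials L^i * a^j * b^k with i + j + k <= 3 (20 terms)."""
    L, a, b = lab
    return np.array([(L ** i) * (a ** j) * (b ** k)
                     for i, j, k in itertools.product(range(4), repeat=3)
                     if i + j + k <= 3])

def new_cluster_center(color_points, category, target_scores):
    """Fit the suitability prediction formula by least squares and return the
    index of the color point with the largest predicted suitability."""
    X = np.array([cubic_features(color_points[t]) for t in target_scores])
    y = np.array(list(target_scores.values()))
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

    all_X = np.array([cubic_features(color_points[i]) for i in category])
    predicted = all_X @ coeffs
    return category[int(np.argmax(predicted))]
```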
S004, obtaining the final categories of all color points.
In this embodiment, the new cluster center is used as the initial cluster center, and S002 and S003 are repeatedly executed until the initial cluster center is not changed any more, and the obtained initial category is used as the final category.
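Putting the pieces together, the overall iteration (assignment, suitability-based center update, repeat until the centers no longer change) can be sketched as below, reusing the illustrative helpers from the previous sketches; the cap on the number of iterations is a practical safeguard not stated in the original.

```python
import numpy as np

def segment_color_points(color_points, k=3, max_iter=50, seed=0):
    """Iterate assignment and center update until the cluster centers stop changing."""
    rng = np.random.default_rng(seed)
    center_idx = list(rng.choice(len(color_points), size=k, replace=False))

    for _ in range(max_iter):
        centers = [color_points[i] for i in center_idx]
        categories = assign_initial_categories(color_points, centers)

        new_idx = []
        for cat in categories:
            if not cat:                                   # keep an empty cluster's center
                new_idx.append(center_idx[len(new_idx)])
                continue
            scores = target_point_suitabilities(color_points, cat)
            new_idx.append(new_cluster_center(color_points, cat, scores))

        if new_idx == center_idx:                         # centers unchanged: done
            return categories
        center_idx = new_idx

    return categories
```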
S005, clustering all pixel points in the ferrographic image according to the final categories to obtain the segmentation result of all pixel points in the ferrographic image.
Obtaining each color point contained in each final category, and obtaining all pixel points corresponding to the color points in the ferrogram image, namely obtaining all pixel points corresponding to all the color points contained in each final category; and recording a set formed by all pixel points corresponding to a final category as a category of the ferrographic image to obtain all categories of the ferrographic image, wherein the categories are used as segmentation results of all the pixel points in the ferrographic image, and the segmentation results of the ferrographic image are more in line with the segmentation results expected by human vision.
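Finally, the category of each color point can be propagated back to every pixel that maps to it, giving a per-pixel label image; this sketch assumes the pixel_map produced by the conversion sketch in step S001.

```python
import numpy as np

def categories_to_label_image(image_shape, pixel_map, categories):
    """Build a per-pixel label image from the final color-point categories."""
    labels = np.full(image_shape[:2], -1, dtype=int)
    for cat_id, point_indices in enumerate(categories):
        for p in point_indices:
            for r, c in pixel_map[p]:     # every pixel sharing this color point
                labels[r, c] = cat_id
    return labels
```

On a converted image, calling segment_color_points and then categories_to_label_image yields an integer label image whose regions constitute the segmentation result.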
In summary, in the invention, all pixel points in the ferrogram image are converted into color points in the Lab color space; obtaining all initial categories of all color points according to the initial clustering center and the first color difference value; obtaining the appropriateness of each target color point in each initial category, obtaining the appropriateness of all color points in each initial category according to an appropriateness prediction formula, and further obtaining a new clustering center; obtaining the final categories of all color points through multiple iterations; and obtaining a segmentation result of the ferrographic image according to the final classification of all the color points. The method of the invention enables the segmentation result of the ferrographic image to better accord with the expected segmentation effect of human eye vision, and provides better basis for the wear condition monitoring and fault diagnosis of equipment.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.
Claims (4)
1. The intelligent segmentation method of the ferrographic image based on the machine vision is characterized by comprising the following steps:
obtaining an iron spectrum image of abrasive dust in the oil sample through a microscopic module; converting all pixel points in the ferrographic image into color points in a Lab color space;
randomly selecting a first number of color points from all color points in the Lab color space, and recording the color points as initial clustering centers;
S1: obtaining all initial categories of all color points according to all initial cluster centers, including:
calculating all first color difference values of all initial clustering centers and all color points in the Lab color space; clustering all the color points by using a K-means clustering algorithm according to all the first color difference values and the initial clustering centers to obtain all initial categories of all the color points;
S2: obtaining all new clustering centers according to all initial categories, including:
for any one initial class, randomly sampling a plurality of color points from all the color points of the initial class, and marking as target color points; calculating all second color difference values of all target color points and all other color points in the initial category, and obtaining the appropriate degree of the target color points according to all the second color difference values; calculating the appropriateness of each color point of the initial category according to the appropriateness of all target color points; recording the color point with the maximum suitable degree in all the color points of the initial category as a new clustering center; for all initial categories, obtaining all new clustering centers;
repeatedly executing S1 and S2 by taking the new clustering center as an initial clustering center until the initial clustering center is not changed any more, and taking the obtained initial category as a final category;
clustering all pixel points in the ferrograph image according to all color points corresponding to the final category to obtain segmentation results of all pixel points in the ferrograph image;
the step of obtaining the appropriate degree of the target color point according to all the second color difference values comprises:
recording the sequence formed by all second color difference values between the target color point and all other color points in the initial category as the color difference value sequence of the target color point, and obtaining the suitability of the target color point from this sequence; the suitability of the target color point is an exponential function with the natural constant e as its base that decreases with the variance of the color difference values in the sequence, with the difference between the maximum and minimum color difference values in the sequence, and with the mean of the color difference values in the sequence;
the step of calculating the suitability of each color point of the initial class according to the suitability of all target color points comprises:
obtaining the appropriate degrees of all target color points according to the color difference value sequence of all target color points in each initial class; fitting all target color points and the suitable degrees of all the target color points by a least square method to obtain a suitable degree prediction formula; and obtaining the appropriateness of all the color points of each initial category through an appropriateness prediction formula according to the brightness values and the color values of the color points.
2. The intelligent segmentation method for ferrographic images based on machine vision according to claim 1, wherein the step of calculating the first color difference value and the second color difference value comprises:
obtaining the tolerance of the initial cluster center and judging whether a color point in the Lab color space lies within the tolerance of the initial cluster center; if it does, the first color difference value is 0; otherwise the first color difference value is the Euclidean color difference in the Lab color space:
$$\Delta E_1=\sqrt{(L_c-L_p)^2+(a_c-a_p)^2+(b_c-b_p)^2}$$
wherein $L_c$ is the luminance value of the initial cluster center, $L_p$ is the luminance value of the color point in the Lab color space, $(a_c,b_c)$ are the color values of the initial cluster center, $(a_p,b_p)$ are the color values of the color point in the Lab color space, and $\Delta E_1$ is the first color difference value;
obtaining the tolerance of the target color point and judging whether another color point in the initial category lies within the tolerance of the target color point; if it does, the second color difference value is 0; otherwise the second color difference value is:
$$\Delta E_2=\sqrt{(L_t-L_o)^2+(a_t-a_o)^2+(b_t-b_o)^2}$$
wherein $L_t$ is the luminance value of the target color point, $L_o$ is the luminance value of the other color point in the initial category, $(a_t,b_t)$ are the color values of the target color point, $(a_o,b_o)$ are the color values of the other color point in the initial category, and $\Delta E_2$ is the second color difference value;
the method for acquiring the tolerance comprises the following steps: and acquiring the coordinates of the color points on the chromaticity diagram, acquiring corresponding areas of the color points on the chromaticity diagram according to the coordinates of the color points and MacAdam circles of the color points, and recording the areas as the tolerance of the color points.
3. The machine-vision-based intelligent segmentation method for ferrographic images as claimed in claim 1, wherein the step of clustering all color points by using a K-means clustering algorithm according to all the first color difference values and the initial clustering centers to obtain all the initial classes of all the color points comprises:
according to the first color difference value between the color point and each initial cluster center, and based on the minimum color-difference principle of the K-means clustering algorithm, the color point is assigned to the initial cluster center with which it has the minimum first color difference value; the set of all color points assigned to each initial cluster center is recorded as an initial category, giving all initial categories of all color points.
4. The intelligent segmentation method for a ferrographic image based on machine vision according to claim 1, wherein the step of clustering all pixel points in the ferrographic image according to all color points corresponding to the final classification to obtain the segmentation result of all pixel points in the ferrographic image comprises:
obtaining each color point contained in each final category, and obtaining all pixel points corresponding to each color point in the ferrogram image, namely obtaining all pixel points corresponding to all color points contained in each final category; and recording a set formed by all pixel points corresponding to a final category as a category of the ferrographic image to obtain all categories of the ferrographic image, wherein the categories are used as segmentation results of all pixel points in the ferrographic image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211241929.4A CN115311276B (en) | 2022-10-11 | 2022-10-11 | Intelligent segmentation method for ferrographic image based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211241929.4A CN115311276B (en) | 2022-10-11 | 2022-10-11 | Intelligent segmentation method for ferrographic image based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115311276A CN115311276A (en) | 2022-11-08 |
CN115311276B true CN115311276B (en) | 2023-01-17 |
Family
ID=83868114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211241929.4A Active CN115311276B (en) | 2022-10-11 | 2022-10-11 | Intelligent segmentation method for ferrographic image based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311276B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118691603A (en) * | 2024-08-22 | 2024-09-24 | 浙江海都电气有限公司 | Method, equipment and medium for detecting abrasion of plug-in column of ammeter connector |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020102017A1 (en) * | 2000-11-22 | 2002-08-01 | Kim Sang-Kyun | Method and apparatus for sectioning image into plurality of regions |
CN102360494A (en) * | 2011-10-18 | 2012-02-22 | 中国科学院自动化研究所 | Interactive image segmentation method for multiple foreground targets |
CN110910417A (en) * | 2019-10-29 | 2020-03-24 | 西北工业大学 | Weak and small moving target detection method based on super-pixel adjacent frame feature comparison |
Also Published As
Publication number | Publication date |
---|---|
CN115311276A (en) | 2022-11-08 |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant