CN116523922B - Bearing surface defect identification method - Google Patents

Bearing surface defect identification method

Info

Publication number
CN116523922B
CN116523922B (application CN202310814579.4A)
Authority
CN
China
Prior art keywords
defect
pixel point
target
edge pixel
area
Prior art date
Legal status
Active
Application number
CN202310814579.4A
Other languages
Chinese (zh)
Other versions
CN116523922A (en)
Inventor
张园
钟庆龙
席永刚
陈羽航
Current Assignee
Shanghai Sandmann Foundry Co Ltd
Original Assignee
Shanghai Sandmann Foundry Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sandmann Foundry Co Ltd
Priority to CN202310814579.4A
Publication of CN116523922A
Application granted
Publication of CN116523922B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of material testing and analysis, in particular to a method for identifying defects on a bearing surface. The method comprises the following steps: acquiring, by optical means, a visible-light image of the surface of the bearing to be detected, and graying the visible-light image; performing bearing surface extraction, identification and segmentation on the resulting gray-scale image; performing defect identification on the bearing surface area image; determining the shape and size defect information corresponding to each target region in the target region set; generating dust cover surface defect information; and generating bearing surface defect information characterizing the defect condition of the bearing to be detected. The invention solves the technical problem of low accuracy in identifying defects on the bearing surface when material analysis and testing are performed by optical means, in particular visible-light imaging, and achieves the technical effect of improving that accuracy. It is mainly applied to defect identification on bearing surfaces.

Description

Bearing surface defect identification method
Technical Field
The invention relates to the technical field of material testing and analysis, in particular to a bearing surface defect identification method.
Background
Bearings are used throughout the machinery industry, for example in automobile rear wheels, transmissions, general-purpose motors, internal combustion engines, construction machinery, railway vehicles and handling machinery. A defective bearing can therefore cause damage that is far from negligible: when the bearing of an automobile rear wheel is defective, for example, it often impairs normal driving and endangers the driver. Identifying defects on the bearing surface is therefore important. Currently, defect identification on a bearing surface is generally performed as follows. First, a visible-light image of the surface of the bearing to be detected and a template image are acquired. Then, the similarity between the visible-light image and the template image is determined by image matching. Finally, the defect condition of the bearing to be detected is determined from the similarity. The template image may be a defect-free surface image of a bearing of the same model and specification as the bearing to be detected.
However, when the above manner is adopted, there are often the following technical problems:
Image matching demands high precision in the defect threshold, is sensitive to noise points, and detects small local defects poorly. Determining the similarity between the visible-light image of the bearing surface and the template image directly by image matching, and then inferring the defect condition of the bearing from that similarity, therefore yields low defect recognition accuracy.
Disclosure of Invention
The summary of the invention is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the technical problem of low accuracy of defect identification on the surface of a bearing, the invention provides a method for identifying the defect on the surface of the bearing.
The invention provides a bearing surface defect identification method, which comprises the following steps:
obtaining a bearing surface visible light image of a bearing to be detected, and graying the bearing surface visible light image to obtain a bearing surface gray image;
carrying out bearing surface extraction, identification and segmentation on the bearing surface gray level image to obtain a bearing surface area image corresponding to a bearing surface area in the bearing surface gray level image;
performing defect identification on the bearing surface area image to obtain a target area set and an area shape and size defect degree set, wherein the target area set comprises: a dust cover area;
determining shape and size defect information corresponding to each target area in the target area set according to the target area set and the area shape and size defect degree set to obtain a shape and size defect information set;
generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network;
and generating bearing surface defect information representing the defect condition of the bearing to be detected according to the shape and size defect information set and the dust cover surface defect information.
Further, the performing defect recognition on the bearing surface area image to obtain a target area set and an area shape size defect degree set includes:
performing edge detection on the bearing surface area image to obtain a target edge pixel point set;
screening out an actual edge pixel point set from the target edge pixel point set;
determining an actual circle center pixel point according to the bearing surface area image and the actual edge pixel point set;
dividing the actual edge pixel points in the actual edge pixel point set according to the actual edge pixel point set and the actual circle center pixel points to obtain an actual edge pixel point class set;
and determining the target region set and the region shape and size defect degree set according to the actual edge pixel point class set and the actual circle center pixel point.
Further, the step of screening the actual edge pixel point set from the target edge pixel point set includes:
when a neighborhood edge pixel point exists in a first neighborhood corresponding to a target edge pixel point in the target edge pixel point set, determining the target edge pixel point as a first edge pixel point, wherein the first neighborhood is a preset neighborhood, and the neighborhood edge pixel point is an edge pixel point in the neighborhood;
determining the possibility of the preliminary edge point corresponding to the first edge pixel point according to the gray value corresponding to the first edge pixel point and the gray value corresponding to each neighborhood edge pixel point in a second neighborhood corresponding to the first edge pixel point, wherein the second neighborhood is a preset neighborhood;
connecting neighborhood edge pixel points in a second neighborhood corresponding to the first edge pixel point to obtain an edge arc corresponding to the first edge pixel point;
making a vertical line of an edge arc line corresponding to the first edge pixel point by passing through the first edge pixel point, and determining a characteristic direction corresponding to the first edge pixel point;
determining a gray difference characteristic value corresponding to the first edge pixel point according to the gray value corresponding to each neighborhood pixel point and the gray value corresponding to the first edge pixel point on a straight line where the characteristic direction corresponding to the first edge pixel point is located, wherein the neighborhood pixel point is a pixel point in a second neighborhood;
determining characteristic directions and gray difference characteristic values corresponding to all neighborhood pixel points on an edge arc line corresponding to the first edge pixel point;
determining the possibility of the corrected edge point corresponding to the first edge pixel point according to the possibility of the preliminary edge point corresponding to the first edge pixel point, the characteristic direction and the gray scale difference characteristic value, and the characteristic direction and the gray scale difference characteristic value corresponding to each neighborhood pixel point on the edge arc line corresponding to the first edge pixel point;
and when the probability of the corrected edge point corresponding to the first edge pixel point is larger than a preset threshold value of the probability of the edge point, determining the first edge pixel point as an actual edge pixel point.
Further, the determining the actual center pixel point according to the bearing surface area image and the actual edge pixel point set includes:
for each pixel point in the bearing surface area image, determining the circle center confidence corresponding to the pixel point according to the number of the actual edge pixel points in the actual edge pixel point set and the characteristic direction corresponding to the actual edge pixel points in the actual edge pixel point set;
and determining the pixel point with the maximum circle center confidence coefficient corresponding to the bearing surface area image as the actual circle center pixel point.
Further, the determining, according to the number of the actual edge pixels in the actual edge pixel set and the feature direction corresponding to the actual edge pixels in the actual edge pixel set, the circle center confidence corresponding to the pixels includes:
when a straight line of the characteristic direction corresponding to the actual edge pixel point in the actual edge pixel point set passes through the pixel point, determining the actual edge pixel point as a reference edge pixel point corresponding to the pixel point;
determining the number of the reference edge pixel points corresponding to the pixel points as the reference number corresponding to the pixel points;
and determining the ratio of the reference number corresponding to the pixel points to the number of the actual edge pixel points in the actual edge pixel point set as the circle center confidence corresponding to the pixel points.
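As a non-limiting illustration of the circle center confidence defined above, the following Python sketch counts, for a candidate pixel, the fraction of edge pixels whose feature-direction line passes through it. The function name, the unit-vector representation of feature directions, and the distance tolerance are assumptions made for this sketch, not details fixed by the claims:

```python
def center_confidence(pixel, edge_points, directions, tol=1e-6):
    """Fraction of edge pixels whose feature-direction line passes
    through `pixel` (a Hough-style vote, as the claim describes).

    `directions` are unit vectors along each edge pixel's feature
    direction; a candidate lies on the line when the perpendicular
    distance falls below `tol`.
    """
    px, py = pixel
    hits = 0
    for (x, y), (dx, dy) in zip(edge_points, directions):
        # Perpendicular distance from `pixel` to the line through
        # (x, y) with direction (dx, dy): |cross(p - q, d)|.
        dist = abs((px - x) * dy - (py - y) * dx)
        if dist <= tol:
            hits += 1
    return hits / len(edge_points)

# Four edge points on a circle of radius 5 around (0, 0); their
# feature directions (perpendiculars to the edge arc) point inward.
pts = [(5, 0), (0, 5), (-5, 0), (0, -5)]
dirs = [(-1, 0), (0, -1), (1, 0), (0, 1)]
conf_center = center_confidence((0, 0), pts, dirs)  # true centre
conf_off = center_confidence((2, 3), pts, dirs)     # off-centre pixel
```

The true centre collects a vote from every edge pixel, while an off-centre pixel collects none, which is why taking the pixel with maximum confidence recovers the actual circle center pixel point.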
Further, the dividing the actual edge pixel points in the actual edge pixel point set according to the actual edge pixel point set and the actual circle center pixel point to obtain an actual edge pixel point category set includes:
determining the Euclidean distance between the actual edge pixel point and the actual circle center pixel point as the target distance corresponding to the actual edge pixel point according to each actual edge pixel point and the actual circle center pixel point in the actual edge pixel point set;
and carrying out clustering division on the actual edge pixel points in the actual edge pixel point set according to the target distances corresponding to the actual edge pixel points in the actual edge pixel point set to obtain the actual edge pixel point class set.
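The distance-based clustering of actual edge pixel points can be sketched as follows. The claims do not fix a particular clustering algorithm, so this illustration uses a simple gap-based grouping of sorted centre distances; the `gap` parameter and function name are assumptions:

```python
import math

def cluster_by_radius(edge_points, center, gap=2.0):
    """Group edge pixels by their distance to the centre pixel:
    sorted distances are split wherever consecutive distances
    differ by more than `gap`."""
    cx, cy = center
    pts = sorted(edge_points,
                 key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    clusters = [[pts[0]]]
    prev = math.hypot(pts[0][0] - cx, pts[0][1] - cy)
    for p in pts[1:]:
        d = math.hypot(p[0] - cx, p[1] - cy)
        if d - prev > gap:
            clusters.append([])  # large radial gap: start a new class
        clusters[-1].append(p)
        prev = d
    return clusters

# Edge pixels on two concentric circles (radii 5 and 10) around
# (0, 0): a toy stand-in for inner-ring and outer-ring edges.
inner = [(5, 0), (0, 5), (-5, 0), (0, -5)]
outer = [(10, 0), (0, 10), (-10, 0), (0, -10)]
clusters = cluster_by_radius(inner + outer, (0, 0))
```

Each resulting class corresponds to one concentric edge of the bearing, and the regions between adjacent classes become the target areas described in the next claim.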
Further, the determining the target region set and the region shape size defect degree set according to the actual edge pixel point category set and the actual circle center pixel point includes:
connecting each actual edge pixel point in each actual edge pixel point category in the actual edge pixel point category set to obtain an actual edge corresponding to the actual edge pixel point category;
determining the area between the actual edges corresponding to two adjacent actual edge pixel point categories in the actual edge pixel point category set as a target area corresponding to the two actual edge pixel point categories, and obtaining the target area set;
and for each target region in the target region set, determining the shape and size defect degree of the region corresponding to the target region according to Euclidean distance between the actual edge pixel points in the two actual edge pixel point categories corresponding to the target region and the actual circle center pixel point, the number of the actual edge pixel points in the actual edge corresponding to the two actual edge pixel point categories corresponding to the target region, and the two standard distances corresponding to the target region, which are acquired in advance.
Further, the determining, according to the target region set and the region shape size defect degree set, shape size defect information corresponding to each target region in the target region set includes:
when the shape and size defect degree of the region corresponding to the target region is larger than a preset shape and size defect degree threshold, generating shape and size defect information representing that the shape and size of the target region have defects, and taking the shape and size defect information as the shape and size defect information corresponding to the target region;
and when the shape and size defect degree of the region corresponding to the target region is smaller than or equal to the shape and size defect degree threshold, generating shape and size defect information representing that the shape and size of the target region are not defective, and taking the shape and size defect information as the shape and size defect information corresponding to the target region.
Further, the generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network includes:
clustering and dividing the pixel points in the dust cover area according to the gray values corresponding to the pixel points in the dust cover area to obtain abnormal pixel point categories and background pixel point categories;
screening a non-noise abnormal point set from the abnormal pixel point category;
determining a defect area set according to the non-noise abnormal point set;
for each defect region in the defect region set, determining the similarity between the defect region and each template image according to the defect region and each template image in a template image set acquired in advance, and obtaining a similarity set corresponding to the defect region;
determining the maximum similarity in a similarity set corresponding to each defect region in the defect region set as the target similarity corresponding to the defect region;
screening a target defect area set from the defect area set according to the target similarity corresponding to the defect area in the defect area set;
according to each target defect region in the target defect region set, determining a gray level co-occurrence matrix corresponding to the target defect region;
determining a gray entropy value corresponding to each target defect region in the target defect region set according to the gray co-occurrence matrix corresponding to each target defect region in the target defect region set;
determining a background area according to the background pixel point category;
determining a gray entropy value corresponding to the background area;
determining the real initial defect probability corresponding to the target defect region according to the gray entropy value corresponding to the background region and the gray entropy value corresponding to each target defect region in the target defect region set;
acquiring a symmetrical region corresponding to each target defect region in the target defect region set;
for each target defect region in the target defect region set, determining a target correlation corresponding to the target defect region according to the target defect region and a symmetrical region corresponding to the target defect region;
determining the real defect probability corresponding to each target defect region in the target defect region set according to the real initial defect probability and the target correlation corresponding to the target defect region;
when the real defect probability corresponding to the target defect region in the target defect region set is larger than a preset real defect probability threshold, determining the target defect region as a real defect region;
extracting features of the real defect area to obtain defect feature vectors corresponding to the real defect area;
inputting the defect feature vector corresponding to the real defect area into a dust cover surface defect identification network, and determining the defect type corresponding to the real defect area through the dust cover surface defect identification network;
and generating the surface defect information of the dust cover according to the defect type corresponding to the real defect area.
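Among the sub-steps above, the gray level co-occurrence matrix and its gray entropy value can be sketched as follows. The neighbour offset, the gray-level quantisation, and the logarithm base are choices made for this illustration and are not specified by the source:

```python
import numpy as np

def glcm_entropy(gray, levels=8):
    """Build a gray level co-occurrence matrix for the horizontal
    neighbour offset (1, 0) and return its entropy."""
    q = gray.astype(np.int64) * levels // 256  # quantise to `levels`
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for i in range(q.shape[0]):
        for j in range(q.shape[1] - 1):
            glcm[q[i, j], q[i, j + 1]] += 1.0
    p = glcm / glcm.sum()          # joint probability of level pairs
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

rng = np.random.default_rng(1)
flat = np.full((16, 16), 128, dtype=np.uint8)              # uniform
textured = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # noisy
e_flat = glcm_entropy(flat)
e_tex = glcm_entropy(textured)
```

A uniform region yields zero entropy while a textured or defective region yields high entropy, which is the contrast the method exploits when comparing each target defect region against the background area.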
Further, the training process of the dust cover surface defect recognition network comprises the following steps:
constructing a dust cover surface defect identification network;
acquiring a sample defect area set, wherein the defect category corresponding to a sample defect area in the sample defect area set is known;
extracting characteristics of each sample defect region in the sample defect region set to obtain a defect characteristic vector corresponding to the sample defect region;
and training the dust cover surface defect recognition network by utilizing the defect types and the defect feature vectors corresponding to each sample defect region in the sample defect region set to obtain a trained dust cover surface defect recognition network.
The invention has the following beneficial effects:
The bearing surface defect identification method of the invention performs material analysis and testing by optical means, in particular visible-light imaging, solves the technical problem of low accuracy in identifying defects on the bearing surface, and achieves the technical effect of improving that accuracy. First, a visible-light image of the surface of the bearing to be detected is obtained and grayed to obtain a bearing surface gray-scale image. Because the visible-light image captures the surface of the bearing to be detected, subsequent analysis of this image makes it possible to determine the defect condition of the bearing. Next, bearing surface extraction, identification and segmentation are performed on the gray-scale image to obtain the bearing surface area image corresponding to the bearing surface area. In practice, the visible-light image often captures not only the surface of the bearing to be detected but also the platform area on which the bearing is placed, for example a conveyor belt. Identifying and segmenting the bearing surface area from the gray-scale image allows subsequent analysis to consider only the bearing surface area, with no need to analyze the platform area, which reduces the amount of computation and the computing resources occupied. Then, defect identification is performed on the bearing surface area image to obtain a target area set and an area shape and size defect degree set, the target area set including a dust cover area.
Next, the shape and size defect information corresponding to each target area in the target area set is determined from the target area set and the area shape and size defect degree set, yielding a shape and size defect information set. In practice, the bearing to be detected usually comprises several target areas whose shapes and sizes differ, so determining each target area and accurately determining its area shape and size defect degree improves the accuracy of the resulting shape and size defect information. Dust cover surface defect information is then generated from the dust cover area and the trained dust cover surface defect identification network. In practice, the dust cover area often contains preset patterns that are sometimes misjudged as defects, whereas the other target areas in the target area set usually do not; further defect identification therefore needs to be performed only on the dust cover area, which improves both the accuracy and the efficiency of bearing surface defect identification. Finally, bearing surface defect information characterizing the defect condition of the bearing to be detected is generated from the shape and size defect information set and the dust cover surface defect information. The invention thus solves the technical problem of low accuracy in identifying bearing surface defects when material analysis and testing are performed by optical means, in particular visible-light imaging, and achieves the technical effect of improving that accuracy.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of some embodiments of a method of identifying bearing surface defects in accordance with the present invention;
FIG. 2 is a schematic view of the upper surface of a bearing to be inspected according to the present invention;
FIG. 3 is a schematic diagram of a set of target regions according to the present invention;
fig. 4 is a schematic view of a target defect area and a symmetric area according to the present invention.
Wherein reference numerals in fig. 2 include: the upper surface 201 of the bearing to be inspected.
The reference numerals in fig. 3 include: an inner ring region 301, a dust cap region 302, and an outer ring region 303.
The reference numerals in fig. 4 include: a target defect area 401 and a symmetric area 402.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of the specific implementation, structure, features and effects of the technical solution according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a bearing surface defect identification method, which comprises the following steps:
obtaining a bearing surface visible light image of a bearing to be detected, and graying the bearing surface visible light image to obtain a bearing surface gray image;
carrying out bearing surface extraction, identification and segmentation on the bearing surface gray level image to obtain a bearing surface area image corresponding to a bearing surface area in the bearing surface gray level image;
performing defect identification on the bearing surface area image to obtain a target area set and an area shape and size defect degree set;
determining shape and size defect information corresponding to each target area in the target area set according to the target area set and the area shape and size defect degree set to obtain a shape and size defect information set;
generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network;
and generating bearing surface defect information representing the defect condition of the bearing to be detected according to the shape and size defect information set and the dust cover surface defect information.
Each of these steps is developed in detail below:
referring to FIG. 1, a flow chart of some embodiments of a bearing surface defect identification method according to the present invention is shown. The bearing surface defect identification method comprises the following steps:
step S1, obtaining a bearing surface visible light image of a bearing to be detected, and graying the bearing surface visible light image to obtain a bearing surface gray image.
In some embodiments, a bearing surface visible light image of a bearing to be detected may be obtained, and the bearing surface visible light image may be grayed to obtain a bearing surface gray image.
The bearing to be detected may be a bearing requiring defect detection. The bearing surface visible light image may be an image of the upper or lower surface of the bearing to be inspected. For example, as shown in fig. 2, the bearing surface visible light image may be an image of the upper surface 201 of the bearing to be inspected.
As an example, the bearing surface visible light image may first be acquired by an industrial camera under a fixed light source, and may be an RGB image. The image may then be grayed by a weighted grayscale conversion to obtain the bearing surface gray-scale image.
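A weighted grayscale conversion of the kind mentioned above can be sketched as follows. The source does not state which weights are used, so this illustration assumes the common ITU-R BT.601 luma weights:

```python
import numpy as np

def to_gray(rgb):
    """Weighted grayscale conversion: each pixel's gray value is a
    weighted sum of its R, G and B channels (BT.601 weights assumed)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights

# A tiny 2x2 "RGB image": pure red, green, blue, and white pixels.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
gray = to_gray(img)
```

The white pixel maps to 255 and the pure-colour pixels map to their channel weight times 255, showing how the weighting preserves perceived brightness better than a plain average.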
And S2, carrying out bearing surface extraction, identification and segmentation on the bearing surface gray level image to obtain a bearing surface area image corresponding to the bearing surface area in the bearing surface gray level image.
In some embodiments, the bearing surface gray level image may be subjected to bearing surface extraction, identification and segmentation, so as to obtain a bearing surface area image corresponding to a bearing surface area in the bearing surface gray level image.
The bearing surface area can be an area where the bearing to be detected is located, which is shot in the bearing surface gray level image. The bearing surface area image may be an image of the bearing surface area.
As an example, the bearing surface gray-scale image may be segmented by the Otsu thresholding method, taking the bearing surface area as the foreground and the remainder of the gray-scale image as the background, to obtain the bearing surface area image. The area outside the bearing surface area may be the photographed platform area on which the bearing to be detected is placed, for example a photographed conveyor belt.
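The Otsu thresholding step can be sketched as follows, here run on a synthetic image with a dark "platform" background and a bright "bearing" disc; the image itself is fabricated purely for illustration:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of foreground and background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # mask empty classes
    return int(np.argmax(sigma_b))

# Synthetic scene: dark background (~40) with a bright square (~200)
# standing in for the bearing surface area.
rng = np.random.default_rng(0)
img = rng.normal(40, 5, (64, 64))
img[16:48, 16:48] = rng.normal(200, 5, (32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
foreground = img > t   # mask of the bearing surface area
```

The threshold lands between the two intensity modes, so the bright region is isolated as foreground without any manually tuned cut-off.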
And S3, carrying out defect identification on the bearing surface area image to obtain a target area set and an area shape and size defect degree set.
In some embodiments, defect identification may be performed on the bearing surface area image to obtain a target area set and an area shape and size defect degree set.
Wherein, the target area set may include: an inner ring region, a dust cap region, and an outer ring region. The inner ring region may be a region where the inner ring of the bearing surface to be detected is located. The dust cap area may be an area where the dust cap of the bearing surface to be detected is located. The outer ring region may be a region where the outer ring of the bearing surface to be detected is located. For example, as shown in fig. 3, the set of target regions may include: an inner ring region 301, a dust cap region 302, and an outer ring region 303. The region shape size defectivity in the set of region shape size defectivity may characterize a defectivity of a shape size of the target region.
As an example, this step may include the steps of:
and firstly, carrying out edge detection on the bearing surface area image to obtain a target edge pixel point set.
For example, edge detection can be performed on the bearing surface area image through an edge detection algorithm to obtain a target edge pixel point set. The edge detection algorithm may be a canny operator detection algorithm.
And step two, screening out an actual edge pixel point set from the target edge pixel point set.
The target edge pixel point set may include: a set of interference pixels and a set of actual edge pixels. The actual edge pixels in the actual edge pixel set may be actual edge pixels. The disturbing pixels in the set of disturbing pixels may be noise or defective pixels.
For example, this step may include the sub-steps of:
and a first sub-step of determining the target edge pixel point as a first edge pixel point when the neighborhood edge pixel point exists in the first neighborhood corresponding to the target edge pixel point in the target edge pixel point set.
The first neighborhood may be a preset neighborhood. For example, the first neighborhood may be an eight neighborhood. The neighborhood edge pixels may be edge pixels within a neighborhood.
And a second sub-step of determining the possibility of the preliminary edge point corresponding to the first edge pixel point according to the gray value corresponding to the first edge pixel point and the gray value corresponding to each neighborhood edge pixel point in the second neighborhood corresponding to the first edge pixel point.
The second neighborhood may be a preset neighborhood. For example, the second neighborhood may be a 5×5 neighborhood. The likelihood of the preliminary edge point corresponding to the first edge pixel point may initially characterize the likelihood that the first edge pixel point is an actual edge pixel point.
For example, the formula corresponding to determining the likelihood of the preliminary edge point corresponding to the first edge pixel point may be:

$$P_j = \exp\left(-\sum_{i=1}^{N}\left|g_j - g_i\right|\right)$$

Wherein, $P_j$ is the preliminary edge point likelihood corresponding to the $j$-th first edge pixel point. $N$ is the number of edge pixel points, other than the first edge pixel point itself, in the second neighborhood corresponding to the $j$-th first edge pixel point. $e$ is the natural constant, the base of the exponential. $g_j$ is the gray value corresponding to the $j$-th first edge pixel point. $g_i$ is the gray value corresponding to the $i$-th neighborhood edge pixel point in that second neighborhood.

In practical cases, the smaller $\left|g_j - g_i\right|$ is, the closer the gray value of the $j$-th first edge pixel point is to that of the $i$-th neighborhood edge pixel point, i.e., the smaller the difference between the two pixel points. The smaller the sum of these gray differences, the smaller the overall difference between the first edge pixel point and the neighborhood edge pixel points in its second neighborhood. The exponential normalizes this sum, so the preliminary edge point likelihood $P_j$ falls in $(0, 1]$, which makes the likelihoods of different first edge pixel points convenient to compare. The larger $P_j$ is, the more likely the $j$-th first edge pixel point is an actual edge pixel point.
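The likelihood computation (exponential of the negative sum of absolute gray differences to the neighborhood edge pixel points) can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def preliminary_edge_likelihood(g_center, neighbor_grays):
    """exp(-sum of |gray differences|) to the edge pixels in the second
    neighborhood; identical grays give 1, large differences push toward 0."""
    diffs = np.abs(np.asarray(neighbor_grays, dtype=float) - float(g_center))
    return float(np.exp(-diffs.sum()))
```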
And a third sub-step, connecting the neighborhood edge pixel points in the second neighborhood corresponding to the first edge pixel point to obtain an edge arc line corresponding to the first edge pixel point.
And a fourth sub-step, namely, making a vertical line of an edge arc line corresponding to the first edge pixel point through the first edge pixel point, and determining the characteristic direction corresponding to the first edge pixel point.
For example, two end points of an edge arc corresponding to the first edge pixel point can be connected to obtain a line segment. And (3) making a perpendicular line of the line segment by passing through the first edge pixel point, wherein an included angle between the perpendicular line and the horizontal direction can be used as a characteristic direction corresponding to the first edge pixel point.
And a fifth substep, determining a gray difference characteristic value corresponding to the first edge pixel point according to the gray value corresponding to each neighborhood pixel point and the gray value corresponding to the first edge pixel point on the straight line where the characteristic direction corresponding to the first edge pixel point is located.
The neighboring pixel point may be a pixel point in the second neighboring region.
For example, the formula corresponding to determining the gray difference feature value corresponding to the first edge pixel point may be:

$$D_j = \frac{1}{U}\sum_{I=1}^{U}\left|g_j - g_I\right|$$

Wherein, $D_j$ is the gray difference feature value corresponding to the $j$-th first edge pixel point. $U$ is the number of neighborhood pixel points of the $j$-th first edge pixel point on the straight line where its corresponding feature direction is located. $g_I$ is the gray value corresponding to the $I$-th neighborhood pixel point on that straight line. $g_j$ is the gray value corresponding to the $j$-th first edge pixel point.

In practical cases, the smaller $\left|g_j - g_I\right|$ is, the closer the gray value of the first edge pixel point is to that of the $I$-th neighborhood pixel point on the straight line of its feature direction, i.e., the smaller the difference between them. The smaller the sum of these gray differences, the smaller the overall difference between the first edge pixel point and the neighborhood pixel points on that line. Dividing by $U$ yields the average level of this difference, so the gray difference feature values $D_j$ of different first edge pixel points can be conveniently compared.
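The gray difference feature value is then the average absolute gray difference along the feature-direction line; a minimal sketch with an illustrative name:

```python
import numpy as np

def gray_difference_feature(g_center, line_grays):
    """Average |gray difference| between the first edge pixel point and the
    neighborhood pixels on the line of its feature direction."""
    diffs = np.abs(np.asarray(line_grays, dtype=float) - float(g_center))
    return float(diffs.mean())
```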
And a sixth substep, determining the characteristic direction and the gray difference characteristic value corresponding to each neighborhood pixel point on the edge arc line corresponding to the first edge pixel point.
For a specific implementation of this substep, refer to the third to fifth substeps included in the second step of step S3: the neighborhood pixel point is taken as the first edge pixel point, and the feature direction and gray difference feature value obtained for it are the feature direction and gray difference feature value corresponding to that neighborhood pixel point.
And a seventh substep, determining the possibility of the corrected edge point corresponding to the first edge pixel point according to the possibility of the preliminary edge point corresponding to the first edge pixel point, the characteristic direction and the gray difference characteristic value, and the characteristic direction and the gray difference characteristic value corresponding to each neighborhood pixel point on the edge arc line corresponding to the first edge pixel point.
Wherein the modified edge point likelihood may characterize a likelihood that the first edge pixel point is an actual edge pixel point.
For example, the formula corresponding to determining the corrected edge point likelihood corresponding to the first edge pixel point may be:

$$F_j = P_j \cdot \frac{1}{M}\sum_{J=1}^{M}\exp\left(-\max\left(\left|\theta_j - \theta_J\right|,\ \left|D_j - D_J\right|\right)\right)$$

Wherein, $F_j$ is the corrected edge point likelihood corresponding to the $j$-th first edge pixel point. $P_j$ is the preliminary edge point likelihood corresponding to the $j$-th first edge pixel point. $M$ is the number of neighborhood pixel points on the edge arc corresponding to the $j$-th first edge pixel point. $\theta_j$ is the feature direction corresponding to the $j$-th first edge pixel point. $\theta_J$ is the feature direction corresponding to the $J$-th neighborhood pixel point on that edge arc. $D_j$ is the gray difference feature value corresponding to the $j$-th first edge pixel point. $D_J$ is the gray difference feature value corresponding to the $J$-th neighborhood pixel point on that edge arc. $\max$ is the maximum function.

In practical cases, the larger the preliminary edge point likelihood $P_j$, the larger the corrected edge point likelihood $F_j$ tends to be, i.e., the more likely the first edge pixel point is an actual edge pixel point. $\left|\theta_j - \theta_J\right|$ characterizes the difference between the feature directions of the $j$-th first edge pixel point and the $J$-th neighborhood pixel point on its edge arc; $\left|D_j - D_J\right|$ characterizes the difference between their gray difference feature values. The smaller either difference is, the more similar the two pixel points are, so the larger the exponential term is and the more likely the first edge pixel point is an actual edge pixel point. Averaging this similarity over all $M$ neighborhood pixel points on the edge arc normalizes the term to $(0, 1]$; since $P_j$ also lies in $(0, 1]$, the corrected edge point likelihood $F_j$ lies in $(0, 1]$, which makes the likelihoods of different first edge pixel points convenient to compare.
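One plausible reading of the correction described above (scale the preliminary likelihood by the average similarity, in feature direction and gray difference feature value, to the pixels on the edge arc) can be sketched as follows; the max-of-differences form and the names are assumptions:

```python
import numpy as np

def corrected_edge_likelihood(p_prelim, theta, d, arc_thetas, arc_ds):
    """Scale the preliminary likelihood by the mean similarity to the
    neighborhood pixels on the edge arc; identical neighbors leave it
    unchanged, dissimilar neighbors shrink it toward 0."""
    sims = [np.exp(-max(abs(theta - t_j), abs(d - d_j)))
            for t_j, d_j in zip(arc_thetas, arc_ds)]
    return float(p_prelim * np.mean(sims))
```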
And an eighth sub-step of determining the first edge pixel point as an actual edge pixel point when the probability of the corrected edge point corresponding to the first edge pixel point is greater than a preset threshold value of the probability of the edge point.
The edge point likelihood threshold may be a maximum corrected edge point likelihood corresponding to the first edge pixel point when the first edge pixel point is not an actual edge pixel point. For example, the edge point likelihood threshold may be 0.8.
And thirdly, determining an actual circle center pixel point according to the bearing surface area image and the actual edge pixel point set.
The actual circle center pixel point may be a pixel point corresponding to the center point of the bearing to be detected.
For example, this step may include the sub-steps of:
the first substep, for each pixel point in the bearing surface area image, determines the circle center confidence corresponding to the pixel point according to the number of the actual edge pixel points in the actual edge pixel point set and the feature direction corresponding to the actual edge pixel points in the actual edge pixel point set.
The larger the circle center confidence corresponding to the pixel point is, the more likely the pixel point is an actual circle center pixel point.
For example, this sub-step may include the steps of:
first, when a straight line of the feature direction corresponding to the actual edge pixel point in the actual edge pixel point set passes through the pixel point, determining the actual edge pixel point as a reference edge pixel point corresponding to the pixel point.
And then, determining the number of the reference edge pixel points corresponding to the pixel points as the reference number corresponding to the pixel points.
And finally, determining the ratio of the reference number corresponding to the pixel points to the number of the actual edge pixel points in the actual edge pixel point set as the circle center confidence corresponding to the pixel points.
And a second substep, determining the pixel point with the largest corresponding circle center confidence in the bearing surface area image as the actual circle center pixel point.
In practical cases, the actual circle center pixel point is the point through which the straight lines along the feature directions of the most actual edge pixel points pass. Therefore, the pixel point with the largest circle center confidence can be taken as the actual circle center pixel point.
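The center-voting idea can be sketched as follows; `tol`, the tolerance for "the line passes through the pixel", is an illustrative parameter the patent does not specify:

```python
import numpy as np

def center_confidence(candidate, edge_points, edge_dirs, tol=0.5):
    """Fraction of actual edge points whose feature-direction line passes
    within `tol` pixels of `candidate` (point-to-line distance)."""
    cx, cy = candidate
    hits = 0
    for (px, py), theta in zip(edge_points, edge_dirs):
        nx, ny = -np.sin(theta), np.cos(theta)  # unit normal of the line
        if abs((cx - px) * nx + (cy - py) * ny) <= tol:
            hits += 1
    return hits / len(edge_points)
```

The actual circle center pixel point is then the candidate pixel with the largest confidence.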
And step four, dividing the actual edge pixel points in the actual edge pixel point set according to the actual edge pixel point set and the actual circle center pixel points to obtain an actual edge pixel point class set.
The actual edge pixel point category set may include 6 actual edge pixel point categories. As shown in fig. 3, the actual set of edge pixel point categories may include: two actual edge pixel point categories corresponding to two edges of the inner ring region 301, two actual edge pixel point categories corresponding to two edges of the dust cap region 302, and two actual edge pixel point categories corresponding to two edges of the outer ring region 303.
For example, this step may include the sub-steps of:
and a first sub-step of determining the Euclidean distance between the actual edge pixel point and the actual circle center pixel point as the target distance corresponding to the actual edge pixel point according to each actual edge pixel point and the actual circle center pixel point in the actual edge pixel point set.
And a second sub-step of carrying out clustering division on the actual edge pixel points in the actual edge pixel point set according to the target distances corresponding to the actual edge pixel points in the actual edge pixel point set to obtain the actual edge pixel point class set.
For example, according to the target distance corresponding to each actual edge pixel point in the actual edge pixel point set, the actual edge pixel points in the actual edge pixel point set may be clustered and divided by a K-means mean value clustering algorithm (k=6), so as to obtain the actual edge pixel point class set.
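The clustering operates on the 1-D radial distances; a minimal Lloyd-iteration sketch (scikit-learn's `KMeans` with `n_clusters=6` would do the same job; the function name and seeding are illustrative):

```python
import numpy as np

def kmeans_1d(values, k, iters=100, seed=0):
    """Minimal Lloyd's k-means on 1-D data (the target distances)."""
    vals = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(vals, size=k, replace=False))
    labels = np.zeros(len(vals), dtype=int)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([vals[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```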
Fifthly, determining the target region set and the region shape size defect degree set according to the actual edge pixel point type set and the actual circle center pixel points.
For example, this step may include the sub-steps of:
the first sub-step is to connect each actual edge pixel in each actual edge pixel category in the actual edge pixel category set to obtain an actual edge corresponding to the actual edge pixel category.
And a second sub-step of determining the area between the actual edges corresponding to the two adjacent actual edge pixel point categories in the actual edge pixel point category set as the target area corresponding to the two actual edge pixel point categories to obtain the target area set.
For example, first, the actual edge pixel point categories in the actual edge pixel point category set may be ordered according to the target distances corresponding to the actual edge pixel points in the actual edge pixel point categories, so as to obtain an actual edge pixel point category sequence. Then, an area between the first and second actual edges corresponding to the first and second actual edge pixel categories in the actual edge pixel category sequence may be determined as a target area. The area between the third and fourth actual edges in the actual edge pixel class sequence may be determined as the target area. The region between the actual edges corresponding to the fifth and sixth actual edge pixel categories in the actual edge pixel category sequence may be determined as the target region.
And a third sub-step of determining, for each target region in the target region set, a region shape dimension defect corresponding to the target region according to a euclidean distance between an actual edge pixel point in two actual edge pixel point categories corresponding to the target region and the actual circle center pixel point, the number of actual edge pixel points in an actual edge corresponding to the two actual edge pixel point categories corresponding to the target region, and two standard distances corresponding to the target region, which are acquired in advance.
The two standard distances corresponding to the target area can be Euclidean distances between two edges of the target area and the actual circle center pixel point when the target area is not defective.
For example, the formula for determining the region shape size defect degree corresponding to the target region may be:

$$S = \frac{1}{n_1}\sum_{x=1}^{n_1}\left|d_x - L_1\right| + \frac{1}{n_2}\sum_{y=1}^{n_2}\left|d_y - L_2\right|$$

Wherein, $S$ is the region shape size defect degree corresponding to the target region. $n_1$ is the number of actual edge pixel points in the first of the two actual edge pixel point categories corresponding to the target region, and $n_2$ is the number in the second. $d_x$ is the Euclidean distance between the $x$-th actual edge pixel point in the first category and the actual circle center pixel point; $d_y$ is the Euclidean distance between the $y$-th actual edge pixel point in the second category and the actual circle center pixel point. $L_1$ and $L_2$ are the two standard distances corresponding to the target region. The first standard distance $L_1$ may be the Euclidean distance, when the target region is not defective, between the edge corresponding to the first of the two actual edge pixel point categories and the actual circle center pixel point; the second standard distance $L_2$ is defined analogously for the second category.

In practical cases, the larger $\left|d_x - L_1\right|$ or $\left|d_y - L_2\right|$ is, the larger the difference between the Euclidean distance from an edge of the target region to the actual circle center pixel point and the corresponding standard distance, i.e., the more abnormal the shape and size of the target region. $S$ characterizes the degree of that abnormality relative to the standard: the larger the region shape size defect degree $S$, the greater the degree of abnormality of the shape and size of the target region.
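The defect degree computation can be sketched as follows, under the reading that each edge contributes the mean absolute deviation of its radial distances from the defect-free standard distance (an assumed normalization):

```python
import numpy as np

def shape_size_defect(dists_first, dists_second, std_first, std_second):
    """Sum, over the two edges of a target region, of the mean absolute
    deviation of the radial distances from the standard distance."""
    dev1 = np.abs(np.asarray(dists_first, dtype=float) - std_first).mean()
    dev2 = np.abs(np.asarray(dists_second, dtype=float) - std_second).mean()
    return float(dev1 + dev2)
```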
And S4, determining shape and size defect information corresponding to each target area in the target area set according to the target area set and the area shape and size defect degree set, and obtaining a shape and size defect information set.
In some embodiments, the shape size defect information corresponding to each target region in the target region set may be determined according to the target region set and the region shape size defect degree set, to obtain a shape size defect information set.
The shape size defect information in the shape size defect information set can represent whether the shape size of the target area is defective or not.
As an example, this step may include the steps of:
when the shape size defect degree of the area corresponding to the target area is larger than a preset shape size defect degree threshold value, generating shape size defect information representing that the shape size of the target area has defects, and taking the shape size defect information as the shape size defect information corresponding to the target area.
The shape size defect degree threshold may be a maximum shape size defect degree of the region when no defect occurs in the target region. For example, the shape size defect level threshold may be 0.9.
And a second step of generating shape size defect information indicating that the shape size of the target area has no defect when the shape size defect degree of the area corresponding to the target area is smaller than or equal to the shape size defect degree threshold value, as the shape size defect information corresponding to the target area.
And S5, generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network.
In some embodiments, the dust cover surface defect information may be generated from the dust cover area and the trained dust cover surface defect identification network described above.
The dust cover area may be an area between actual edges corresponding to a third and fourth actual edge pixel point category in the actual edge pixel point category sequence.
As an example, this step may include the steps of:
the first step, according to the gray value corresponding to the pixel point in the dust cover area, clustering and dividing the pixel point in the dust cover area to obtain an abnormal pixel point type and a background pixel point type.
The abnormal pixel points in the abnormal pixel point category may be pixel points of the preset pattern, noise, or defects in the dust cover area. The preset pattern may be a pattern set in advance; for example, the preset pattern may be a letter. The background pixel points in the background pixel point category may be the normal pixel points in the dust cover area other than the preset pattern. A mark of the preset pattern is often present on the dust cover of the bearing to be detected.
For example, according to the gray value corresponding to the pixel point in the dust cover area, the pixel point in the dust cover area can be clustered and divided by a K-means mean value clustering algorithm (k=2) to obtain two pixel point categories. The pixel point type with the largest number of included pixels may be a background pixel point type. The pixel class with the least number of pixels included may be an outlier pixel class.
And step two, screening a non-noise abnormal point set from the abnormal pixel point categories.
Wherein the noise point may be an isolated outlier pixel. The non-noise outliers in the set of non-noise outliers may be outlier pixels other than noise points.
For example, whether an abnormal pixel point is a noise point may be determined by the euclidean distance between the abnormal pixel points.
And thirdly, determining a defect area set according to the non-noise abnormal point set.
For example, first, when the euclidean distance between two non-noise outliers is smaller than a distance threshold set in advance, the two non-noise outliers may be regarded as the same category. Wherein the distance threshold may characterize the maximum Euclidean distance between two non-noise outliers when they are not of the same class. For example, the distance threshold may be 0.1. Then, when the classification of the non-noise outlier in the non-noise outlier set is completed, the region in which the non-noise outlier in the same class is located is determined as a defective region.
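The grouping of non-noise outliers by the distance threshold amounts to single-linkage clustering; a union-find sketch (function name illustrative):

```python
import math

def group_by_distance(points, dist_thresh):
    """Two non-noise outliers belong to the same defect region when a chain
    of pairwise Euclidean distances below dist_thresh connects them."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < dist_thresh:
                parent[find(i)] = find(j)

    regions = {}
    for i in range(n):
        regions.setdefault(find(i), []).append(points[i])
    return list(regions.values())
```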
Fourth, for each defect region in the defect region set, determining similarity between the defect region and each template image according to the defect region and each template image in the template image set acquired in advance, and obtaining a similarity set corresponding to the defect region.
The template image in the template image set may be an image of a preset pattern.
For example, for each defect region in the defect region set, template matching may be performed on each template image in the defect region and the template image set, and similarity between the defect region and each template image may be determined, so as to obtain a similarity set corresponding to the defect region.
And fifthly, determining the maximum similarity in the similarity set corresponding to each defect area in the defect area set as the target similarity corresponding to the defect area.
And sixthly, screening a target defect area set from the defect area set according to the target similarity corresponding to the defect areas in the defect area set.
For example, when the target similarity corresponding to the defective area is greater than a preset similarity threshold, the defective area may be an area corresponding to a preset pattern. The similarity threshold may represent a maximum target similarity corresponding to the defective area when the defective area is not an area corresponding to the preset pattern. For example, the similarity threshold may be 0.95. When the target similarity corresponding to the defective area is less than or equal to the similarity threshold, the defective area may be a target defective area.
And seventh, determining a gray level co-occurrence matrix corresponding to the target defect area according to each target defect area in the target defect area set.
The specific implementation manner of this step may be implemented in an existing manner, which is not described herein in detail.
And eighth step, determining a gray entropy value corresponding to the target defect area according to the gray co-occurrence matrix corresponding to each target defect area in the target defect area set.
The specific implementation manner of this step may be implemented in an existing manner, which is not described herein in detail.
And ninth, determining a background area according to the background pixel point category.
The background area may be an area where a background pixel point in the background pixel point category is located.
And tenth, determining the gray entropy value corresponding to the background area.
The specific implementation manner of this step may be implemented in an existing manner, which is not described herein in detail.
Eleventh step, determining the real initial defect probability corresponding to the target defect region according to the gray entropy value corresponding to the background region and the gray entropy value corresponding to each target defect region in the target defect region set.
Wherein the true initial defect probability may characterize a likelihood of the target defect region being a preliminary determination of the true defect region.
For example, the formula for determining the true initial defect probability corresponding to the target defect region may be:

$$P = 1 - e^{-\left|ENT - ENT_0\right|}$$

Wherein, $P$ is the true initial defect probability corresponding to the target defect region. $e$ is the natural constant. $ENT$ is the gray entropy value corresponding to the target defect region. $ENT_0$ is the gray entropy value corresponding to the background area.

In practical cases, the smaller $\left|ENT - ENT_0\right|$ is, the closer the texture of the target defect region is to that of the background, and the less likely the target defect region is a true defect region. The exponential normalizes the entropy difference, so the true initial defect probabilities $P$ of different target defect regions can be conveniently compared. The larger the true initial defect probability $P$, the more likely the target defect region is a true defect region.
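The entropy comparison can be sketched as follows, assuming the probability takes the form of one minus an exponential of the negative entropy gap (an illustrative reading of the normalization described above):

```python
import math

def true_initial_defect_probability(ent_region, ent_background):
    """1 - exp(-|entropy gap|): a region whose gray entropy matches the
    background gets probability 0; a large gap pushes it toward 1."""
    return 1.0 - math.exp(-abs(ent_region - ent_background))
```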
And twelfth, acquiring a symmetrical area corresponding to each target defect area in the target defect area set.
For example, as shown in fig. 4, the symmetric region corresponding to the target defect region 401 may be the symmetric region 402.
Thirteenth, for each target defect area in the target defect area set, determining a target correlation corresponding to the target defect area according to the target defect area and a symmetric area corresponding to the target defect area.
The target correlation corresponding to the target defect area may be a similarity between the target defect area and a symmetric area corresponding to the target defect area.
For example, the formula for determining the target correlation corresponding to the target defect region may be:

$$X_B = H_B \cdot \exp\left(-\frac{1}{\max\left(N_B, N_B'\right)}\sum_{r}\left|g_r - g_r'\right|\right)$$

Wherein, $X_B$ is the target correlation corresponding to the $B$-th target defect region in the target defect region set. $H_B$ is the contour similarity, determined by a matching algorithm, between the $B$-th target defect region and the symmetric region corresponding to the $B$-th target defect region. $\max$ is the maximum function. $N_B$ is the number of pixel points in the $B$-th target defect region, and $N_B'$ is the number of pixel points in the corresponding symmetric region. $g_r$ is the gray value corresponding to the $r$-th pixel point in the $B$-th target defect region, and $g_r'$ is the gray value corresponding to the $r$-th pixel point in the corresponding symmetric region; the sum runs over the paired pixel points of the two regions.

In practical cases, the dust cover area is a circular ring area. Due to the symmetry of the ring, a normal region in the dust cover area is often similar to its symmetric region, whereas a true defect region tends not to have such symmetry, i.e., a true defect region and its symmetric region tend not to be similar. Whether the target defect region is a normal region or a true defect region can therefore be judged by whether it is similar to its symmetric region. The larger the contour similarity $H_B$ between the $B$-th target defect region and its symmetric region, the larger the target correlation $X_B$. The exponential term characterizes the degree of gray difference between the $B$-th target defect region and its symmetric region: the larger this term, the smaller the gray difference between the two regions, and the larger the target correlation $X_B$.
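A sketch under an illustrative reading of the correlation (contour similarity damped by the average gray difference over paired pixels; the pairing and normalization are assumptions, not the patent's exact formulation):

```python
import numpy as np

def target_correlation(contour_sim, grays_region, grays_symmetric):
    """Contour similarity damped by the gray difference between the target
    defect region and its symmetric region; identical grays leave it as-is."""
    ga = np.asarray(grays_region, dtype=float)
    gb = np.asarray(grays_symmetric, dtype=float)
    m = min(len(ga), len(gb))                       # pair up to shorter region
    mean_diff = np.abs(ga[:m] - gb[:m]).sum() / max(len(ga), len(gb))
    return float(contour_sim * np.exp(-mean_diff))
```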
Fourteenth step, determining the real defect probability corresponding to the target defect region according to the real initial defect probability and the target correlation corresponding to each target defect region in the target defect region set.
Wherein the probability of a true defect corresponding to the target defect region may characterize the likelihood that the target defect region is a true defect region.
For example, the formula for determining the true defect probability corresponding to the target defect region may be:
Here, the first quantity is the true defect probability corresponding to the B-th target defect region in the target defect region set; the second is the true initial defect probability corresponding to the B-th target defect region; the third is the target correlation corresponding to the B-th target defect region; and the last is a small fraction greater than 0 whose role is to prevent the denominator from being 0.
In practice, the larger the true initial defect probability corresponding to the B-th target defect region, or the smaller the target correlation corresponding to the B-th target defect region, the larger the true defect probability corresponding to the B-th target defect region.
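The probability formula itself is an image in the source; what the text does state is that the score grows with the initial probability, shrinks with the correlation, and that a small positive constant keeps the denominator nonzero. A minimal sketch consistent with those three statements (the division form and the default `eps` value are assumptions, and the result is an unnormalised score rather than a probability in [0, 1]):

```python
def true_defect_probability(initial_prob, correlation, eps=1e-6):
    """Illustrative true-defect score: grows with the initial probability,
    shrinks as the target correlation grows; eps keeps the denominator > 0."""
    return initial_prob / (correlation + eps)

# A region is kept as a real defect when its score exceeds the preset threshold.
THRESHOLD = 0.9
print(true_defect_probability(0.85, 0.2) > THRESHOLD)
```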
Fifteenth, determining the target defect area as a real defect area when the real defect probability corresponding to the target defect area in the target defect area set is larger than a preset real defect probability threshold.
The true defect probability threshold may represent a maximum true defect probability corresponding to the target defect region when the target defect region is a normal region. For example, the true defect probability threshold may be 0.9.
Sixteenth, extracting features of the real defect area to obtain defect feature vectors corresponding to the real defect area.
The defect feature vector corresponding to the real defect area may include, but is not limited to: the area, perimeter, length, width, entropy, energy and contrast of the real defect region.
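The seven listed features can be computed from a gray patch and a boolean region mask; a self-contained numpy sketch follows. The GLCM quantisation level, pixel offset, and bounding-box definitions of length/width are assumptions (the patent defers the details to prior art).

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    q = gray.astype(int) * levels // 256            # quantise to `levels` bins
    h, w = q.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def defect_features(gray, mask):
    """Area, perimeter, length, width, entropy, energy and contrast of a region.

    `gray` is the gray image patch; `mask` is a boolean array marking the region."""
    ys, xs = np.nonzero(mask)
    area = mask.sum()
    length = ys.max() - ys.min() + 1                # bounding-box height
    width = xs.max() - xs.min() + 1                 # bounding-box width
    # Perimeter: region pixels with at least one 4-neighbour outside the region.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    p = glcm(gray)
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()             # texture entropy
    energy = (p ** 2).sum()                         # texture energy
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()             # texture contrast
    return np.array([area, perimeter, length, width, entropy, energy, contrast])

gray = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))
mask = np.ones((8, 8), dtype=bool)
print(defect_features(gray, mask))
```

For the 8x8 all-ones mask, the area is 64, the perimeter (border pixels) is 28, and both bounding-box sides are 8.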
The specific implementation manner of this step may be implemented by using the prior art, and will not be described herein.
Seventeenth, inputting the defect feature vector corresponding to the real defect area into a dust cover surface defect recognition network, and determining the defect type corresponding to the real defect area through the dust cover surface defect recognition network.
The dust cover surface defect recognition network may be a classified neural network. The dust cover surface defect recognition network may be used to identify defect categories for the true defect areas. The defect class corresponding to the real defect area may be a defect class of the real defect area.
Eighteenth, generating the defect information of the dust cover surface according to the defect type corresponding to the real defect area.
The dust cover surface defect information may include defect types corresponding to each real defect area.
Optionally, the training process of the dust cover surface defect recognition network may include the following steps:
first, constructing a dust cover surface defect identification network.
The specific implementation manner of this step may be implemented by using the prior art, and will not be described herein.
And secondly, acquiring a sample defect area set.
Wherein the defect type corresponding to the sample defect area in the sample defect area set may be known.
And thirdly, extracting the characteristics of each sample defect region in the sample defect region set to obtain a defect characteristic vector corresponding to the sample defect region.
Wherein, the defect characteristic vector corresponding to the sample defect area may include, but is not limited to: area, perimeter, length, width, entropy, energy, and contrast of the sample defect region.
The specific implementation manner of this step may be implemented by using the prior art, and will not be described herein.
And fourthly, training the dust cover surface defect recognition network by utilizing the defect types and the defect feature vectors corresponding to the sample defect areas in the sample defect area set to obtain the trained dust cover surface defect recognition network.
Wherein the loss function of the dust cap surface defect identification network may be a cross entropy loss function.
The specific implementation manner of this step may be implemented by using the prior art, and will not be described herein.
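The four training steps above can be sketched end to end. The patent does not specify the network architecture, so a plain softmax classifier trained by gradient descent on the cross-entropy loss stands in for the dust cover surface defect recognition network; the sample data, learning rate, and iteration count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample defect set: 7-D defect feature vectors with known defect classes.
n_classes, n_feat = 3, 7
X = rng.normal(size=(90, n_feat))
y = np.repeat(np.arange(n_classes), 30)
X += y[:, None]                        # shift features so classes are separable

W = np.zeros((n_feat, n_classes))      # the "network": one linear softmax layer
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):                   # gradient descent on cross-entropy loss
    p = softmax(X @ W + b)
    onehot = np.eye(n_classes)[y]
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    grad = (p - onehot) / len(y)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(round(accuracy, 2))
```

After training, the classifier separates the three synthetic defect classes well above chance, mirroring step four's goal of a trained recognition network.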
And S6, generating bearing surface defect information representing the defect condition of the bearing to be detected according to the shape and size defect information set and the dust cover surface defect information.
In some embodiments, bearing surface defect information characterizing a defect condition of the bearing to be detected may be generated according to the set of shape size defect information and the dust cover surface defect information.
The bearing surface defect information can represent the defect condition of the bearing to be detected. The bearing surface defect information may include: a set of shape and size defect information and dust cap surface defect information.
Alternatively, in order to more accurately determine the defect condition of the bearing to be detected, first, images of the upper surface and the lower surface of the bearing to be detected may be acquired and grayed, to obtain two gray-scale images. And then, each gray level image in the two gray level images is used as a bearing surface gray level image, and the steps S2 to S6 are executed to obtain the bearing surface defect information corresponding to each gray level image, so that the defect condition of the upper surface and the lower surface of the bearing to be detected can be obtained.
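The optional two-surface variant can be sketched as follows; the luminosity weights for graying and the stand-in analysis callback are assumptions, since the patent defers graying to prior art and the S2-S6 pipeline is described elsewhere in the document.

```python
import numpy as np

def to_gray(rgb):
    """Luminosity grayscale conversion (ITU-R BT.601 weights; an assumed choice)."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def inspect_bearing(upper_rgb, lower_rgb, analyse):
    """Gray both surface images, then run the S2-S6 pipeline (`analyse`) on each."""
    return [analyse(to_gray(img)) for img in (upper_rgb, lower_rgb)]

upper = np.full((4, 4, 3), [200, 100, 50], dtype=np.uint8)
lower = np.full((4, 4, 3), [50, 100, 200], dtype=np.uint8)
# Stand-in for steps S2-S6: report the mean gray level of the surface.
results = inspect_bearing(upper, lower, lambda g: float(g.mean()))
print(results)
```

Each entry of `results` corresponds to the bearing surface defect information for one of the two surfaces, so both the upper and lower surface defect conditions are obtained.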
The bearing surface defect identification method of the present invention performs material analysis and testing by optical means, specifically solving the technical problem of low accuracy in identifying defects on a bearing surface and achieving the technical effect of improving that accuracy. First, a visible light image of the surface of the bearing to be detected is obtained and grayed to obtain a bearing surface gray image. Because the visible light image captures the surface of the bearing to be detected, it facilitates subsequent analysis of the image to determine the defect condition of the bearing. Next, bearing surface extraction, identification and segmentation are performed on the bearing surface gray image to obtain a bearing surface area image corresponding to the bearing surface area in the gray image. In practice, the visible light image often captures not only the surface of the bearing to be detected but also the platform area on which the bearing is placed; for example, that platform area may be a conveyor belt. Identifying and segmenting the bearing surface area from the gray image makes it possible to analyze only the bearing surface area in subsequent steps, so the platform area need not be analyzed, which reduces the amount of computation and the occupation of computing resources. Defect identification is then performed on the bearing surface area image to obtain a target area set and an area shape size defect degree set, where the target area set includes a dust cover area.
Next, the shape size defect information corresponding to each target area in the target area set is determined according to the target area set and the area shape size defect degree set, yielding a shape size defect information set. In practice, the bearing to be detected often comprises multiple target areas whose shapes and sizes differ, so determining each target area and accurately determining its corresponding area shape size defect degree improves the accuracy of the resulting shape size defect information. Dust cover surface defect information is then generated according to the dust cover area and the trained dust cover surface defect identification network. In practice, the dust cover area often contains preset patterns that are sometimes misjudged as defects, whereas target areas other than the dust cover area usually do not contain such patterns; therefore only the dust cover area needs further defect identification, which improves both the accuracy and the efficiency of defect identification on the bearing surface. Finally, bearing surface defect information characterizing the defect condition of the bearing to be detected is generated according to the shape size defect information set and the dust cover surface defect information. The invention thus solves, by optical means and specifically by visible light analysis and testing of the material, the technical problem of low accuracy in bearing surface defect identification, achieving the technical effect of improved identification accuracy.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application and are intended to be included within the scope of the application.

Claims (9)

1. A method for identifying bearing surface defects, comprising the steps of:
obtaining a bearing surface visible light image of a bearing to be detected, and graying the bearing surface visible light image to obtain a bearing surface gray image;
carrying out bearing surface extraction, identification and segmentation on the bearing surface gray level image to obtain a bearing surface area image corresponding to a bearing surface area in the bearing surface gray level image;
performing defect identification on the bearing surface area image to obtain a target area set and an area shape size defect degree set, wherein the target area set comprises: a dust cap area;
Determining shape and size defect information corresponding to each target area in the target area set according to the target area set and the area shape and size defect degree set to obtain a shape and size defect information set;
generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network;
generating bearing surface defect information representing the defect condition of the bearing to be detected according to the shape and size defect information set and the dust cover surface defect information;
generating dust cover surface defect information according to the dust cover area and the trained dust cover surface defect identification network, including:
clustering and dividing the pixel points in the dust cover area according to the gray values corresponding to the pixel points in the dust cover area to obtain abnormal pixel point categories and background pixel point categories;
screening a non-noise abnormal point set from the abnormal pixel point category;
determining a defect area set according to the non-noise abnormal point set;
for each defect region in the defect region set, determining the similarity between the defect region and each template image according to the defect region and each template image in a template image set acquired in advance, and obtaining a similarity set corresponding to the defect region;
Determining the maximum similarity in a similarity set corresponding to each defect region in the defect region set as the target similarity corresponding to the defect region;
screening a target defect area set from the defect area set according to the target similarity corresponding to the defect area in the defect area set;
according to each target defect region in the target defect region set, determining a gray level co-occurrence matrix corresponding to the target defect region;
determining a gray entropy value corresponding to each target defect region in the target defect region set according to the gray co-occurrence matrix corresponding to each target defect region in the target defect region set;
determining a background area according to the background pixel point category;
determining a gray entropy value corresponding to the background area;
determining the real initial defect probability corresponding to the target defect region according to the gray entropy value corresponding to the background region and the gray entropy value corresponding to each target defect region in the target defect region set;
acquiring a symmetrical region corresponding to each target defect region in the target defect region set;
for each target defect region in the target defect region set, determining a target correlation corresponding to the target defect region according to the target defect region and a symmetrical region corresponding to the target defect region;
Determining the real defect probability corresponding to each target defect region in the target defect region set according to the real initial defect probability and the target correlation corresponding to the target defect region;
when the real defect probability corresponding to the target defect region in the target defect region set is larger than a preset real defect probability threshold, determining the target defect region as a real defect region;
extracting features of the real defect area to obtain defect feature vectors corresponding to the real defect area;
inputting the defect feature vector corresponding to the real defect area into a dust cover surface defect identification network, and determining the defect type corresponding to the real defect area through the dust cover surface defect identification network;
and generating the surface defect information of the dust cover according to the defect type corresponding to the real defect area.
2. The method for identifying defects on a bearing surface according to claim 1, wherein the step of identifying defects on the image of the bearing surface area to obtain a target area set and an area shape and size defect set comprises the steps of:
performing edge detection on the bearing surface area image to obtain a target edge pixel point set;
Screening out an actual edge pixel point set from the target edge pixel point set;
determining an actual circle center pixel point according to the bearing surface area image and the actual edge pixel point set;
dividing the actual edge pixel points in the actual edge pixel point set according to the actual edge pixel point set and the actual circle center pixel points to obtain an actual edge pixel point class set;
and determining the target region set and the region shape and size defect degree set according to the actual edge pixel point class set and the actual circle center pixel point.
3. The method of claim 2, wherein said screening out the actual set of edge pixels from the set of target edge pixels comprises:
when a neighborhood edge pixel point exists in a first neighborhood corresponding to a target edge pixel point in the target edge pixel point set, determining the target edge pixel point as a first edge pixel point, wherein the first neighborhood is a preset neighborhood, and the neighborhood edge pixel point is an edge pixel point in the neighborhood;
determining the possibility of the preliminary edge point corresponding to the first edge pixel point according to the gray value corresponding to the first edge pixel point and the gray value corresponding to each adjacent edge pixel point in a second adjacent area corresponding to the first edge pixel point, wherein the second adjacent area is a preset adjacent area;
Connecting neighborhood edge pixel points in a second neighborhood corresponding to the first edge pixel point to obtain an edge arc corresponding to the first edge pixel point;
making a vertical line of an edge arc line corresponding to the first edge pixel point by passing through the first edge pixel point, and determining a characteristic direction corresponding to the first edge pixel point;
determining a gray difference characteristic value corresponding to the first edge pixel point according to the gray value corresponding to each neighborhood pixel point and the gray value corresponding to the first edge pixel point on a straight line where the characteristic direction corresponding to the first edge pixel point is located, wherein the neighborhood pixel point is a pixel point in a second neighborhood;
determining characteristic directions and gray difference characteristic values corresponding to all neighborhood pixel points on an edge arc line corresponding to the first edge pixel point;
determining the possibility of the corrected edge point corresponding to the first edge pixel point according to the possibility of the preliminary edge point corresponding to the first edge pixel point, the characteristic direction and the gray scale difference characteristic value, and the characteristic direction and the gray scale difference characteristic value corresponding to each neighborhood pixel point on the edge arc line corresponding to the first edge pixel point;
and when the probability of the corrected edge point corresponding to the first edge pixel point is larger than a preset threshold value of the probability of the edge point, determining the first edge pixel point as an actual edge pixel point.
4. A method of identifying bearing surface imperfections as claimed in claim 3 wherein said determining actual center pixels from said bearing surface area image and said actual set of edge pixels comprises:
for each pixel point in the bearing surface area image, determining the circle center confidence corresponding to the pixel point according to the number of the actual edge pixel points in the actual edge pixel point set and the characteristic direction corresponding to the actual edge pixel points in the actual edge pixel point set;
and determining the pixel point with the maximum circle center confidence coefficient corresponding to the bearing surface area image as the actual circle center pixel point.
5. The method for identifying a bearing surface defect according to claim 4, wherein determining the circle center confidence corresponding to the pixel points according to the number of actual edge pixel points in the actual edge pixel point set and the feature direction corresponding to the actual edge pixel points in the actual edge pixel point set includes:
when a straight line of the characteristic direction corresponding to the actual edge pixel point in the actual edge pixel point set passes through the pixel point, determining the actual edge pixel point as a reference edge pixel point corresponding to the pixel point;
Determining the number of the reference edge pixel points corresponding to the pixel points as the reference number corresponding to the pixel points;
and determining the ratio of the reference number corresponding to the pixel points to the number of the actual edge pixel points in the actual edge pixel point set as the circle center confidence corresponding to the pixel points.
6. The method for identifying a bearing surface defect according to claim 2, wherein the dividing the actual edge pixel points in the actual edge pixel point set according to the actual edge pixel point set and the actual center pixel point to obtain an actual edge pixel point class set includes:
determining the Euclidean distance between the actual edge pixel point and the actual circle center pixel point as the target distance corresponding to the actual edge pixel point according to each actual edge pixel point and the actual circle center pixel point in the actual edge pixel point set;
and carrying out clustering division on the actual edge pixel points in the actual edge pixel point set according to the target distances corresponding to the actual edge pixel points in the actual edge pixel point set to obtain the actual edge pixel point class set.
7. The method of claim 2, wherein determining the target region set and the region shape size defect level set according to the actual edge pixel point class set and the actual center pixel point comprises:
connecting each actual edge pixel point in each actual edge pixel point category in the actual edge pixel point category set to obtain an actual edge corresponding to the actual edge pixel point category;
determining the area between the actual edges corresponding to two adjacent actual edge pixel point categories in the actual edge pixel point category set as a target area corresponding to the two actual edge pixel point categories, and obtaining the target area set;
and for each target region in the target region set, determining the shape and size defect degree of the region corresponding to the target region according to Euclidean distance between the actual edge pixel points in the two actual edge pixel point categories corresponding to the target region and the actual circle center pixel point, the number of the actual edge pixel points in the actual edge corresponding to the two actual edge pixel point categories corresponding to the target region, and the two standard distances corresponding to the target region, which are acquired in advance.
8. The method for identifying defects on a bearing surface according to claim 1, wherein determining shape and size defect information corresponding to each target region in the set of target regions according to the set of target regions and the set of region shape and size defects comprises:
when the shape and size defect degree of the region corresponding to the target region is larger than a preset shape and size defect degree threshold, generating shape and size defect information representing that the shape and size of the target region have defects, and taking the shape and size defect information as the shape and size defect information corresponding to the target region;
and when the shape and size defect degree of the region corresponding to the target region is smaller than or equal to the shape and size defect degree threshold, generating shape and size defect information representing that the shape and size of the target region are not defective, and taking the shape and size defect information as the shape and size defect information corresponding to the target region.
9. The method of claim 1, wherein the training process of the dust cap surface defect recognition network comprises:
constructing a dust cover surface defect identification network;
acquiring a sample defect area set, wherein the defect category corresponding to a sample defect area in the sample defect area set is known;
Extracting characteristics of each sample defect region in the sample defect region set to obtain a defect characteristic vector corresponding to the sample defect region;
and training the dust cover surface defect recognition network by utilizing the defect types and the defect feature vectors corresponding to each sample defect region in the sample defect region set to obtain a trained dust cover surface defect recognition network.
CN202310814579.4A 2023-07-05 2023-07-05 Bearing surface defect identification method Active CN116523922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310814579.4A CN116523922B (en) 2023-07-05 2023-07-05 Bearing surface defect identification method


Publications (2)

Publication Number Publication Date
CN116523922A CN116523922A (en) 2023-08-01
CN116523922B true CN116523922B (en) 2023-10-20

Family

ID=87390741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310814579.4A Active CN116523922B (en) 2023-07-05 2023-07-05 Bearing surface defect identification method

Country Status (1)

Country Link
CN (1) CN116523922B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541832B (en) * 2024-01-04 2024-04-16 苏州镁伽科技有限公司 Abnormality detection method, abnormality detection system, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102636490A (en) * 2012-04-12 2012-08-15 江南大学 Method for detecting surface defects of dustproof cover of bearing based on machine vision
CN103868924A (en) * 2012-12-18 2014-06-18 江南大学 Bearing appearance defect detecting algorithm based on visual sense
CN115351598A (en) * 2022-10-17 2022-11-18 南通钜德智能科技有限公司 Numerical control machine tool bearing detection method
CN116128873A (en) * 2023-04-04 2023-05-16 山东金帝精密机械科技股份有限公司 Bearing retainer detection method, device and medium based on image recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a machine-vision automatic detection method for bearing dust cover defects; Hao Yong et al.; Measurement & Control Technology; Vol. 39, No. 1; pp. 80-85 *

Also Published As

Publication number Publication date
CN116523922A (en) 2023-08-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant